The short answer is ‘A lot more than many professionals currently think’.
To start, though, the basic facts: ISO/IEC 27001:2013 is the first revision of ISO/IEC 27001. It specifies the requirements for establishing, implementing, maintaining and continually improving an information security management system (ISMS) for any organization, regardless of type or size. Some organizations implement the standard simply to benefit from the best practice it contains, while others also seek certification to reassure customers and clients that its recommendations have been followed. ISO/IEC 27001:2013 is not obligatory in most jurisdictions, but it does provide much-needed market assurance. A certified ISMS gives the market confidence in an organization’s ability to look after information securely – confidence that it will maintain the ‘confidentiality, integrity and availability’ of customer information and, as a result, protect its own and its partners’ reputations.
What is the underlying purpose of the ISO/IEC 27001:2013 standard?
Put simply, the ISO 27000 family of standards helps organizations keep information assets secure. They help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.
ISO/IEC 27001 is the best-known standard in the family, providing the requirements for an information security management system (ISMS).
Whereas in the past, government and large organisations required their suppliers to be ISO 9001-compliant, those who award lucrative contracts now also look for assurances from their suppliers with regard to ISO/IEC 27001.
Large-scale enterprises have a duty of care to preserve the security of the information in their custody – a duty increasingly founded on legal requirements for data protection. If that information is shared with a supplier, the company would be failing in that duty if the supplier’s handling of the information was inherently insecure for lack of adequately defined policies, procedures and controls – in other words, a management system. Whether a company adopts the standard for reasons of governance or market assurance, the pressure to do the right thing is mounting, even if the cost of compliance seems high. Therefore, increasing numbers of organisations are choosing to adopt ISO/IEC 27001:2013.
Is your Information Security Management System (ISMS) ISO 27001:2013 compliant?
It will need to be if you are to achieve UKAS-accredited ISO27001:2013 certification in the year to come.
One year after publication of ISO/IEC 27001:2013, the IAF issued a resolution stating that “…all new accredited certifications issued shall be to ISO/IEC 27001:2013”. [See: Transition to ISO/IEC 27001:2013 – Updated June 2014, UKAS]. This means that UKAS-accredited Certification Bodies (CBs) have not issued any new accredited certificates to ISO/IEC 27001:2005 since September 2014. Organizations that previously complied with the requirements of ISO/IEC 27001:2005 are required to transition promptly to the 2013 version of the standard, and transition audits will be carried out at the next scheduled visit to each certified client. It is time to embrace the changes in ISO/IEC 27001:2013.
So what can you expect from ISO27001:2013 that is different? Two basic changes need to be understood straight away; they are:
The ISO has determined that all new and revised management system standards must conform to the high-level structure and identical core text defined in Annex SL to Part 1 of the ISO/IEC Directives. Conformance means that management system requirements that are not discipline-specific will be identically worded in all management system standards. This change will also apply to the much-anticipated revision of the ISO 9001 Quality Management System standard when it is published in late 2015.
The ISO also decided to align ISO/IEC 27001 with the principles and guidance given in ISO 31000 (risk management). This is good news for integrated management systems as now an organization may apply the same risk assessment methodology across several disciplines, including information security risk. The asset-based risk assessment in the 2005 version of the standard required the identification of asset owners both during the risk assessment process and as control A.7.1.2 in Annex A.
The 2013 revision doesn’t have this requirement and only references asset ownership as control A.8.1.2 in Annex A – about which, more later. Although the A.8.1.2 Ownership of Assets control says that “Assets maintained in the inventory shall be owned”, ISO27001:2013 allows organisations to choose the risk assessment methodology most appropriate for their needs. The identification of assets, threats and vulnerabilities as a prerequisite to the identification of information security risks is no more!
The 2013 version says that the organization shall define and apply an information security risk assessment process that:
a) establishes and maintains information security risk criteria that include: 1) the risk acceptance criteria; and 2) criteria for performing information security risk assessments;
The information security risk assessment should produce “…consistent, valid and comparable results”; identify risks associated with the loss of confidentiality, integrity and availability of information within the scope of the ISMS; and, importantly in view of the changes, “identify risk owners”. Analysis and evaluation of information security risks are also required, including determining the realistic likelihood of a risk occurring and the level of risk it poses. You are required to compare the results of risk analysis with the risk criteria established in 6.1.2 a) and prioritize the analysed risks for risk treatment.
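As a concrete (and deliberately simplified) illustration of that process, the sketch below scores risks as likelihood × impact, compares each against an acceptance criterion, and prioritizes them for treatment. The 1–5 scales, the threshold and the example risks are my own assumptions – the standard leaves the methodology entirely to the organization:

```python
# Illustrative ISO 27001-style risk assessment sketch.
# The scales, threshold and example risks are assumed values;
# ISO/IEC 27001:2013 leaves the methodology to the organization.

# Risk criteria (6.1.2 a): accept any risk scoring at or below this level.
RISK_ACCEPTANCE_THRESHOLD = 6

risks = [
    # (description, risk owner, likelihood 1-5, impact 1-5)
    ("Laptop theft exposes customer data", "IT Manager", 3, 4),
    ("Supplier mishandles shared designs", "Procurement", 2, 5),
    ("Website defacement", "Web Team", 2, 2),
]

def assess(risks):
    """Analyse, evaluate and prioritize risks, each with a named risk owner."""
    results = []
    for description, owner, likelihood, impact in risks:
        level = likelihood * impact  # determine the level of risk
        # Compare with the risk criteria established above.
        needs_treatment = level > RISK_ACCEPTANCE_THRESHOLD
        results.append((description, owner, level, needs_treatment))
    # Prioritize the analysed risks for treatment, highest level first.
    return sorted(results, key=lambda r: r[2], reverse=True)

for description, owner, level, treat in assess(risks):
    status = "TREAT" if treat else "accept"
    print(f"{level:>2}  {status:<7} {owner:<12} {description}")
```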
In Part II of this post, I will look at the following considerations:
How to apply the CIA requirements mentioned in the standard at the level of business objectives. In particular, I will look at how to conduct a high-level risk assessment of business objectives that drills down to the actual risk present at the information level, taking account of the infosec objectives that are fundamental to the control of specific vulnerabilities and threats.
This guest post was written by Michael Shuff. You can email him here.
Find out more about Cognidox Document Management solutions for ISO standards-compliance by downloading our Information Security white paper at http://www.cognidox.com/cognidox/view/VI-403566-TM
We’ve made a new release (1.4) of our OfficeToPDF open source project and pushed the code to its usual home on CodePlex (http://officetopdf.codeplex.com/releases/view/129209).
Apart from a general improvement on stability and exception-handling, the latest version can now support PDF conversion of additional file types:
It also adds new flags, such as /markup, which allows document markup to be shown in the PDF when converting Word documents, and /pdfa, which creates PDF/A files in supported applications (PowerPoint, Word, Visio & Publisher).
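For the command-line usage itself, a typical server-side invocation might look like the sketch below, here driven from Python as you might do in a server-side script. The install path and file names are invented for the example; check the project page for the full switch list:

```python
import shutil
import subprocess

# Illustrative: convert a Word document to PDF/A with document markup shown.
# The /pdfa and /markup switches are as described above; the paths are examples.
cmd = [
    r"C:\Tools\OfficeToPDF.exe",  # assumed install location
    "/pdfa",       # produce a PDF/A file (supported applications only)
    "/markup",     # include Word document markup in the PDF
    r"C:\Docs\spec.docx",
    r"C:\Docs\spec.pdf",
]

# Only run if the tool is actually present (it is Windows-only, and it
# automates the locally installed Office applications to do the conversion).
if shutil.which(cmd[0]):
    subprocess.run(cmd, check=True)
```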
If you measure popularity as a function of number of downloads and user reviews, it’s true that OfficeToPDF isn’t the most popular of our various open source projects. But it addresses a very specific need and user feedback gives us the impression that it’s considered very useful.
We started OfficeToPDF back in the day when fewer applications could save as PDF and we wanted a tool that converted documents via a command line utility on the server rather than on individual desktop / laptop clients. We had an integration then with a leading third party PDF conversion and assembly tool. The USP for that tool was that it could convert from a very large number of document types, and it could be linked to ECM products.
But, the majority of the types supported were never actually encountered by our users, and we were primarily interested in integration with our CogniDox tool. The company seemed to go through a business model change and the most obvious impact of that was a hike in prices. The cost per server now started in the $20K to $24K range, and the annual support was considered high as a result. Most of our users stopped using it and switched to OfficeToPDF instead.
Our house rule is that if any of our CogniDox technologies can stand alone and serve a purpose independent of the CogniDox application, then we open source it. As a conservative estimate, we’re ‘giving away’ at least $10K of value in this software. We still get the occasional request from people to buy a license, and they seem a little confused when we send them the download link and tell them it’s free.
The fact that other developers can freely integrate this code into their process tools and adapt the code to their needs is just as important as zero cost.
Maintaining open source projects when you are busy working “to keep the lights on” isn’t always easy, and it takes a well-funded project to build up a sizeable developer (as opposed to user) community that can help. It still feels good to do it, in the pure spirit of open source development.
It’s one thing to ask whether companies truly trust their employees with company information, but I think most would agree that trusting their ex-employees is definitely not desirable.
I was thinking about this while closing down the logins of a recent leaver on our various SaaS accounts. The internal systems were relatively straightforward – it’s all controlled via a directory service so one inactivation command disabled all logins to our tools.
But, like many companies out there, we’ve signed up to various ‘must have’ SaaS applications running on the public cloud. I’m talking about sales tracking tools, sites for desktop screen-sharing, and of course social media sites. The social networking sites are arguably the worst, because they accept credentials from consumer-facing sites (e.g. Twitter, Google, Facebook, Hotmail) and therefore blur the distinction between personal and company/enterprise usage. If you sign up to a work-related account using your personal email address, it can bring problems for you as an employee. With things like Microsoft accounts, where you can associate multiple email addresses with a single account, an employee who has linked a work email address to a personal account runs the risk of a former employer using that work address to gain access and lock them out of their personal account.
Add to this the security problems caused when an ex-employee’s devices are hacked or stolen – along with the linked work accounts. An employee might alert the company to the problem, but would an ex-employee do the same?
Going back to the task in hand, there was no fear in this case of a ‘bad leaver’. It was just a chore trying to remember all the places where we’d shared or granted access to accounts. We were quick to sign up when we found a good application, but we kept no records, because ever shutting down these accounts seemed a remote possibility.
It would seem from some survey stats out recently that many companies don’t even bother to try closing accounts. One survey found that 89% of ex-employees could still access very confidential information using their ‘old’ logins. This data is on sites such as Salesforce, Facebook, Google Apps, etc. It also found that 45% of these ex-employees did login at least once. That’s close to another stat I’ve seen where 51% of companies found that ex-employees tried to access company data.
IT departments would argue that part of the problem here is that nobody (apart from the users of course) knows these applications are in use. Staff create workspaces on the file sharing sites because it serves a pragmatic need during one busy period or another. The same solution is then re-used to store files that might be needed when access to the company network isn’t possible or convenient. That’s why a huge 68% admit to storing work information in their personal file-sharing cloud.
Another real possibility is that passwords for these applications are shared. There are various reasons for this, but chief among them is avoiding cost and maximizing simplicity. So, say five people have access and one leaves the company. The other four still need to carry on using the tool. Do they remember to change the password? Probably not.
It’s in the interests of SaaS vendors to make the sign-up process as easy as possible. But while I was struggling with the chore of closing down those accounts, my allegiance was definitely with those who warn us about the lack of security that this can bring.
Like many things, a little planning and record-keeping will help in the long run. Here are some suggestions to bear in mind:
The gist is that virus-infected macros fell out of fashion thanks to security changes in Office, but the target is now the user rather than Office. The aim is to persuade the user that the document is somehow more secure because the macro is present, and to get them to click to enable the content.
The article (and the comments that follow) is mostly about random documents sent to you from somewhere out on the Internet. Clicking to open those (let alone to enable their macros) is rarely a good idea.
Inside an enterprise, macros are used more frequently than the article cares to acknowledge. They’re used to add extra automation to Word and Excel. In this case, the macro-enabled document often comes from a known colleague, and the enterprise web domain it came from is a trusted zone.
Typically, a layered security model would be used inside an enterprise to defend against this threat.
The first perimeter layer should be mail scanning – do you really need macro-enabled documents coming in? If not, block them from inbound mail.
The next layer is ensuring that all client PCs are up to date with anti-virus signatures. Check that your enterprise anti-virus solution is scanning Office documents. This catches cases where a document has arrived on a USB stick or via a file-sharing service like Dropbox.
Application level filtering such as setting the macro security to “Disable all macros except digitally signed macros” provides a final layer, but it has the disadvantage that signing isn’t well understood.
A way to improve security (not mentioned in the article) for behind-the-firewall macro-enabled usage is to generate and use a self-signed digital certificate. Self-signed certificates are not suitable for public websites, but they are useful for internal sites and for code signing – confirming the software’s author and guaranteeing the code has not been altered since it was signed. This is especially valuable if the organisation is large and there’s a chance the colleague sending the file is not known to the recipient.
Self-signed certificates can be created for free using a tool such as the OpenSSL toolkit, which can generate an RSA private key and CSR (Certificate Signing Request) for Linux/Apache environments. In a Windows-based environment, you can use a tool such as SelfCert.exe, or generate a code-signing certificate using Microsoft Certificate Services.
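As a sketch of the OpenSSL route (driven here from Python so it could sit in a provisioning script), the following generates an RSA key and a self-signed certificate in a single step. The subject name, key size and validity period are arbitrary examples, and a real deployment would follow your own certificate policy:

```python
import os
import shutil
import subprocess
import tempfile

# One-shot self-signed certificate: generate an RSA key and certificate
# in a single openssl invocation. All values are illustrative examples.
workdir = tempfile.mkdtemp()
key_file = os.path.join(workdir, "signing.key")
cert_file = os.path.join(workdir, "signing.crt")

cmd = [
    "openssl", "req", "-x509",   # produce a self-signed certificate
    "-newkey", "rsa:2048",       # generate a new 2048-bit RSA key
    "-keyout", key_file,
    "-out", cert_file,
    "-days", "365",              # validity period
    "-nodes",                    # do not passphrase-protect the key
    "-subj", "/CN=Internal Code Signing",  # non-interactive subject
]

# Run only where the OpenSSL toolkit is actually installed.
if shutil.which("openssl"):
    subprocess.run(cmd, check=True)
```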
In some implementations the end-user will still get a warning and have to accept the certificate. Some argue this can promote bad habits if end-users become blasé about accepting certificates because “they were told to”. However, in the internal enterprise model we are addressing, the way around this is to pre-install the certificate on every machine so that the trust question is never asked. One way to achieve this is for IT departments to push the certificate out as a trusted publisher to client PCs using group policies. Read this Microsoft TechNet article for more detail.
Sorry to state the obvious, but you receive a lot of emails and your number of unread messages only ever seems to go up.
It’s not just you. The statistics [1] say yours is one of more than 3 billion active email accounts worldwide, and you are in line to receive your share of the roughly 150 billion emails sent worldwide every day. On average, corporate users send or receive around 120 emails a day – roughly 80 received and 40 sent. Other studies suggest that the average knowledge worker spends 14.5 hours per week reading and answering email.
You can tweet and update your social media statuses as much as you like, but your email inbox will still contain the same number of messages. And that number will likely increase by around 5% in the next 12 months.
There is a lot of criticism of email along the lines that it shreds our attention for other tasks and kills our productivity and time management. Yet, the email client continues to be our business “command and control centre” where work is received and tasks are delegated.
Perhaps the better strategy is to improve email, not replace it.
Researchers identified the problems that people experience with task management in email as far back as 2003 [2], but the changes required to solve them are only slowly appearing in email clients. For example:
But some problems with email should probably not be solved in the email client.
Email is at its best as a notification engine, and email used as file storage is not playing to its strengths (ask any Exchange Server administrator :-)). Other business applications are better at managing content. For example, using email attachments to forward documents for review is not a good idea. If anything changes that requires a new version of the document, the reviewers have to sort and search their email messages to make sure they respond to the correct version. Links in email messages to a document repository are a much better idea. Generating those email notifications as part of a document review workflow in the document control system is even better.
Another example is receiving emails (with or without attachments) from external sources that need to be shared with a wider team. An example might be a bid / tender process where the Sales Account Manager receives a set of documents that require a response or completion. Rather than forward these in email, better to store them directly in the document repository where they can be version controlled, reviewed, and edited until the content is final and approved.
To follow on from that with another example, at some point the approved documents need to be sent back to the sender. It is much better if that can be done by directly referencing the document in the repository (rather than the one saved to a hard drive or sent as an email attachment). It removes the opportunity for error.
One ‘problem’ in achieving this is that we use so many different email client applications to read our emails. At present, around 50% of email reading is done on mobile devices; the rest is split between desktop and webmail (around 30% and 20% respectively). But in the business office environment the typical usage is desktop-based, with Microsoft Outlook the most widely used email client.
This is very similar to the fact that the majority of documents that end up in a document repository are produced using the Microsoft Office desktop tools – Word, PowerPoint, and Excel. In a previous software release we dealt with that by providing an add-in for those applications. The aim was to encourage good practice (storing documents in a controlled manner) without having to leave the tools in which they were created.
The solution for better email management therefore is to extend our Microsoft Office Add-in to include support for Outlook.
There was an important shift with the introduction of Outlook 2007 which brought in a new UI and event interface. It’s different enough to make us decide not to support Outlook 2000, 2003 or Express. The Outlook add-in is compatible with Outlook 2007, 2010 and 2013 running under Microsoft Windows XP, Vista, 7 and 8.
We also had a constraint that our internal CogniDox API had to be extended to support email integration. We made the required changes in CogniDox 8.8.0, so you must be running that version (or later) to make use of the latest Add-in.
What the new Add-in provides for Outlook are features such as:
The Outlook Add-in appears as a sidebar in the same style as the Word, Excel and PowerPoint add-ins. One extra feature is support for drag and drop: for example, select a message in Outlook and drag it onto either a category or a document title in the Browse View. It will then create either a new document or a new version, as a draft or an issue.
The new Add-in and User Guide are available as follows:
You will need a user account to access the support site. The software is free to download for existing customers.
[1] Based on various reports from The Radicati Group Email Statistics Reports http://www.radicati.com/
[2] Bellotti, V., Ducheneaut, N., Howard, M., Smith, I., Taking Email to Task: The Design and Evaluation of a Task Management Centred Email Tool, 2003 [PDF]
LinkedIn has opened its publishing platform, called LinkedIn Publish, to the rest of us who are not “Influencers”. You know you have this feature if there is a pencil icon in the “Share an update” field on your LinkedIn homepage. If you don’t, you can ask for access here: http://specialedition.linkedin.com/publishing/.
It’s been promoted as a way to publish “long form posts” (as opposed to the limited character length of a status update). It’s not exactly clear why this isn’t called a blog, but maybe it’s to avoid comparison with the other blogging platforms.
The social media commentators have been active in discussing it, and their advice on whether to use it seems to be: why not? It’s another way to get engagement. And it’s a more focused and targeted audience than on other platforms.
But it isn’t a ‘silver bullet’. You need a large number of connections or followers to be effective and your content needs to be read, liked, and shared to be promoted. There’s also the assumption that your connections are interested in what you have to say – many of us have a mixed bag when it comes to LinkedIn connections. When I joined (in 2004, according to my Account info) the main rationale was to stay in touch with former work colleagues. They are now doing all manner of things, and not necessarily interested in what I am doing today.
I have no insights into whether this or Facebook, Google+, or something else is the future of social media. So I did the obvious geeky thing and looked instead at the technology. The rich text editor they’re using is TinyMCE (the main alternative is CKEditor). It’s been themed in the LinkedIn style but otherwise looks like an ‘out of the box’ TinyMCE toolbar. You can do the expected things like embed images and other media, but you can’t insert raw HTML. You can still embed (for example) a video via its sharing code, so the restriction may not matter all that much to you. But you can use HTML in WordPress.
If you follow the advice I’ve seen on the web and use Microsoft Word to edit the post then directly copy/paste into TinyMCE, I think you will encounter formatting issues sooner rather than later.
One major difference/deficiency compared to WordPress is the lack of categories and tags that you can assign to a post. That will severely hamper search for your future readers once you’ve amassed a decent number of posts. If I understand correctly, assigning your content to suitable channels is something LinkedIn Publish does by algorithm. You can’t control it.
Also, WordPress is more transparent when it comes to where your posts are stored. It’s my guess this post will be stored at one of the two LinkedIn data centres in either Virginia or Texas. But it’s under their control, not mine.
It raises two thoughts for me. The first is that I’d prefer to have my content stored in a document control repository (for version control, review, approval) and then upload it automatically to the LinkedIn Publish site. The second is that marketing folk will want to publish content to many sites (content syndication) and it might be a good feature for us to consider adding LinkedIn Publish to our existing WordPress publishing plug-in. One for the roadmap.
ISO 9001:2008 is not prescriptive – it provides a framework and good advice, but generally leaves it up to the company to do what it considers best, and that includes adopting software tools or methodologies. There are, for example, only six documented procedures that are mandatory.
This isn’t going to change. The draft version of ISO 9001:2015 looks like it will merge documents and records under the term “documented information”, and there will be no mandatory quality manual, procedures or quality records. That won’t mean these documents have no value, but rather that there will be more flexibility in how documented information is managed.
The problem with flexibility is that it can leave a newcomer to ISO 9001 in a confused state. Where do I start? What do I need to do? How do I know when we’re ready for audit? This is why ISO 9001 is often compared with other continuous improvement approaches such as Lean Six Sigma (LSS). Some have said that ISO 9001 provides the “what” and LSS provides the “how”. In truth, that’s an overstatement, because the ‘tools’ associated with LSS are really problem-solving techniques rather than software tools, and they are not coordinated in any particular way.
There are blogs out there that can help with ISO 9001. One useful post this week came from The ISO 9001 Blog, offering seven tips for a documented procedure for controlling your documents.
Take time to read the blog in full, but the list of tips is:
These are very good tips, but the post could be more prescriptive about exactly how to follow them. There’s a mention that “this is often easier with electronic versions than with paper copies”, but in my opinion that advice stops far short. There is a massive advantage in using an electronic DMS to implement these tips.
To rattle through a quick mapping of tips to CogniDox features, we would find that the ability to create workflows with mandatory approvers delivers #1. The review and notification process takes care of #2. Version history and the event log provides #3. A clear link to latest and approved-latest versions solves #4 (as does the ability to hide any version other than the approved-latest one). Tip #5 is supported by embedded metadata in the documents, so readers can see what they are using. We’d look to limited partner access and/or the extranet portal functionality for #6. Finally, tip #7 can be achieved by marking the document as obsolete.
The electronic DMS approach also allows you to add extra tips. For example, using a graphical and interactive version of a procedure (such as a flowchart) makes it far easier to use than a printed page. Using email notification links as an alternative to attachments is another example.
But the absolute stand-out argument in favour of an electronic DMS is the ability it provides to integrate with other line-of-business systems. The technology we use has a major influence on service innovation: by linking (for example) the DMS with help desk systems, we gain better visibility of where our customers are reporting difficulties and which document assets (including software, user guides, etc.) might be affected.
The acceleration in the generation of data (aka big data) puts even more pressure on quality compliance. Without systems to help, keeping up may prove impossible.
This latest addition to our theme “projects that can change the way your company works” looks at the topic of blogging in small to medium companies with a B2B business model.
There is an ocean of words out there advising us that inbound marketing is the future and that the traditional sales funnel concept is obsolete. Now, it’s all about customer success management (CSM) and how you use your content strategy to guide the customer experience. Evidence seems to support this: a HubSpot study in 2013 showed that publishing blog posts daily (as opposed to once per month) brings 70% more organic search traffic and 12% more referral traffic to the website. Since the primary goal of an inbound marketing strategy is to attract visitors to your site, this sounds very appealing.
Regular blogging is key to search engine optimisation too. Anyone who has read even an introduction to SEO knows how important it is to maintain website ‘freshness’ and to encourage good-quality backlinks from other websites. Again, one of the best ways to do both is to write new blog posts on a regular basis.
Sadly, just creating a blog page and then ignoring it has no positive effect on web traffic whatsoever.
This becomes a challenge for companies that are just not used to this style of marketing. I have in mind tech companies who are removed at least one level from the end-consumer product, and who traditionally got by on datasheets and maybe the occasional brochure. One basic problem is: what to write about? There is good advice out there that may help; in summary, don’t think of a blog only as a place for opinion pieces, but also consider other content types such as how-to posts, interviews, trade show reviews, top-ten lists, and so on.
Choosing the blogging platform appears to be the easy part. WordPress is the most popular and is in use at more than 60 million websites with over 44 million blog posts published each month. According to BuiltWith, WP has over 92% market share of high-traffic blog sites. Rivals to WP such as Blogger and Tumblr pale by comparison where usage is concerned.
But there are tactical problems when using a blogging platform in a typical business. Compared to the simple case of the single-author blog, the following issues are common:
The reality for many companies is that too few people (usually in Marketing) shoulder more than their fair share of responsibility for producing content and ensuring it follows the correct company message. They need help from colleagues to produce a steady flow of content, and they need timely approvals from senior management so they can publish with confidence.
It would be a fantastic scenario for any blog editor to have a backlog of articles that are at various stages of review, and a simple approval workflow to mark articles as ready-to-go.
To facilitate this, we added features in CogniDox to help the internal management of blog posts and their publication to the WordPress platform.
CogniDox allows a blog post to be created in-house using tools such as Microsoft Word or the built-in online rich text editor, and then sent to colleagues for review and, once ready, approval. Once approved, a CogniDox plug-in allows the post to be published directly to a WordPress.com or WordPress.org site. The plug-in shows you how the post will appear on the WP site, and allows you to add categories and tags.
It could also be integrated into a Joomla-based website to appear alongside other web pages and tools, by using an open-source tool we’ve built called WordBridge.
Once published, you get the other benefits of WordPress – a vast array of themes and plugins that will enable you to extend your blogging functionality into areas such as adding social media buttons, photo galleries, mailing list forms, e-commerce or membership management.
If you would like more information about this and other CogniDox features, contact us for a demo.
Continuing the theme of projects that can change the way your company works, in this post we’ll look at the topic of Information Findability.
Information Findability is determined primarily by two factors: the layout of the information, and the capability of the search engine.
The quality of the user interface could be a third factor, or you can see it as part of the information layout. What’s behind the first factor is the intuitiveness of navigation, or, more simply stated – how obvious is it that the information you seek will be here rather than there? The problem with information layout is that it is virtually impossible for one structure to suit all needs. Take a trivial example such as a purchase order (PO) raised to procure a piece of test equipment for a project. Where does the PO document belong? In Finance or in R&D? In the specific project folder? The correct answer is: in all of them.
Increasingly, modern document management systems are realising the value of the “virtual folder”. This presents a list of documents that is relevant to where the user is now in their navigation. The list is produced dynamically (from tags) and is most easily displayed with web-based technology where pages are commonly dynamic in nature anyway. This is what end-users expect from Web 2.0 systems and don’t get from network file shares. There is a trade-off here because users don’t want *too much* dynamism, and will expect repeatable results when they navigate to the same place. It has to be both familiar and contain the relevant documents. A case of “don’t make me think” in action.
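As a rough sketch of the idea, a ‘virtual folder’ can be modelled as a dynamic query over document tags rather than a fixed location on disk, so the same PO from the example above shows up in Finance, in R&D, and in the project folder. All names here are illustrative, not the CogniDox API:

```python
# Each document carries a set of tags; a "virtual folder" is a dynamic
# query over those tags rather than a fixed place in a folder tree.
documents = [
    {"title": "PO-1042 Test Equipment", "tags": {"finance", "r-and-d", "project-alpha"}},
    {"title": "Q3 Budget",              "tags": {"finance"}},
    {"title": "Alpha Test Plan",        "tags": {"r-and-d", "project-alpha"}},
]

def virtual_folder(tag):
    """Return the documents relevant to where the user is navigating now."""
    return [d["title"] for d in documents if tag in d["tags"]]

# The same PO appears in Finance, in R&D, and in the project folder.
print(virtual_folder("finance"))        # includes PO-1042
print(virtual_folder("project-alpha"))  # includes PO-1042 too
```

Because the list is produced from tags at navigation time, the result is repeatable for the same location while still letting one document live in several places at once.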
The capability of the search engine is down to three attributes: Faster, Deeper, and Wider.
Let’s start with fastness. Search engines work by converting all the file formats they can read into a single structure called an index. The index gathers together unstructured data from many diverse file formats and provides very rapid retrieval of search results compared to, for example, a lookup in an SQL database. The stored data is stripped of common ‘stop’ words and reduced to word stems to make the index more efficient.
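A toy Python sketch (assuming nothing about any particular engine) shows why an index makes retrieval fast: documents are tokenised once, stop words are dropped, and a query becomes a single dictionary lookup instead of a scan:

```python
# A toy inverted index: documents are tokenised, stop words dropped,
# and each remaining term mapped to the set of documents containing it.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def build_index(docs):
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            if word not in STOP_WORDS:
                index.setdefault(word, set()).add(doc_id)
    return index

docs = {
    1: "the quarterly report of sales",
    2: "sales forecast and budget",
}
index = build_index(docs)

# Retrieval is one dictionary access, which is why an index beats
# scanning the files (or an SQL LIKE query) for speed.
print(index["sales"])  # {1, 2}
```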
The process of creating the index is done by crawling the content, usually automated to run at a specified time or at set intervals. It can also be event-driven, i.e. adding to the index every time new content is added. The methods differ in the system resources they require, and the ‘right’ answer depends on how real-time the data needs to be in order to be valuable. Most document systems incrementally update the index every 15 minutes or so, because that balances a reasonable number of newly added documents against the performance impact of the indexing process.
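The incremental approach can be sketched in a few lines, with hypothetical names: the crawler records when it last ran and only re-indexes documents modified since then, which is what keeps the periodic job cheap:

```python
import time

# Incremental crawl: only documents changed since the last run are
# re-indexed, so the periodic job stays cheap as the repository grows.
last_crawl = 0.0

def crawl(documents, index_fn):
    global last_crawl
    for doc in documents:
        if doc["modified"] > last_crawl:
            index_fn(doc)
    last_crawl = time.time()

indexed = []
docs = [
    {"name": "spec.doc", "modified": 100.0},
    {"name": "plan.doc", "modified": 200.0},
]
crawl(docs, indexed.append)  # first run indexes everything
crawl(docs, indexed.append)  # second run finds nothing new to index
print([d["name"] for d in indexed])
```

An event-driven system would instead call the indexing step directly whenever content is added, trading higher steady-state load for near-real-time results.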
Let’s move to deepness. A deep search is one where the most statistically relevant result is obtained. Imagine a system that only supported search over a limited data model (such as a fixed taxonomy or ontology). It’s unlikely to be useful. At a minimum, we should be looking for full-text search capabilities. It must not be the case that an end-user has to manually tag content with metadata in order for it to be found. If a document contains the relevant words, it should be found. Whether it is displayed to a user may depend on the user’s right to see the document (“security trimming”), but it should be found. Another important factor for depth is flexibility in what file types can be indexed. The more file formats that can be indexed, the better. It’s also essential to be able to add custom formats.
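The interplay between full-text matching and security trimming can be sketched as follows (the documents and group names are invented for illustration): the search finds every document containing the query terms, and the result list is then filtered by the user’s rights before display:

```python
# Security trimming: full-text search finds every matching document,
# but the result list shown to a user is filtered by their rights.
docs = {
    "hr-salaries.doc": {"text": "salary bands by grade", "allowed": {"hr"}},
    "handbook.doc":    {"text": "salary review process", "allowed": {"hr", "staff"}},
}

def search(query, user_groups):
    hits = [name for name, d in docs.items() if query in d["text"]]
    # Trim: only show hits the user is entitled to see.
    return [h for h in hits if docs[h]["allowed"] & user_groups]

print(search("salary", {"staff"}))  # only the handbook is shown
print(search("salary", {"hr"}))     # both documents are shown
```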
Finally, there is wideness. A wide search is one where you are not limited to a single source (or information silo) but instead content is indexed from a set of sources such as other content repositories, business systems, and network file shares. This is often called federated search. The value is fairly obvious, for example as well as retrieving product part numbers from one system, it may be beneficial to combine that with order information from an ERP, or fault reports from a Help Desk system. The problem with combining data from heterogeneous sources is that success depends on data formats and the feasibility of a search API. Every company has a different set of business systems, which doesn’t help either. Typically, your mileage with the wideness factor is related to the quality of the systems integration / search consultants that you use; and with the openness or otherwise of the tools that they use.
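One common way to structure a federated search is a set of per-source adapters behind one shared interface, with results merged and tagged by origin. The sources and hits below are entirely illustrative:

```python
# Federated search: each source (document system, ERP, help desk) is
# wrapped in an adapter exposing the same search() signature, and the
# results are merged into one list tagged with their origin.
def search_docs(query):
    return ["DS-102 Product Spec"] if "DS-102" in query else []

def search_erp(query):
    return ["Order #884 references DS-102"] if "DS-102" in query else []

def search_helpdesk(query):
    return ["Ticket 55: fault report on DS-102"] if "DS-102" in query else []

SOURCES = {"docs": search_docs, "erp": search_erp, "helpdesk": search_helpdesk}

def federated_search(query):
    results = []
    for source, fn in SOURCES.items():
        results.extend((source, hit) for hit in fn(query))
    return results

for source, hit in federated_search("DS-102"):
    print(f"[{source}] {hit}")
```

In practice the hard part is writing each adapter against whatever API (if any) the business system exposes, which is exactly why the wideness factor depends so heavily on the systems being open.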
A software product such as CogniDox can significantly impact fastness and deepness, and have some impact on wideness.
One approach is to treat Search as an add-on and integrate with leading proprietary search platforms. But leading proprietary products such as FAST, Autonomy, and Endeca have been acquired and merged into larger product lines, making the situation uncertain for their standalone customers. As the influence of these proprietary solutions has diminished, the Apache Solr® open source solution has grown in strength as a powerful, scalable, cross-platform search engine (https://lucene.apache.org/solr/).
Therefore, CogniDox provides built-in search powered by the Solr engine. Solr has a rich set of features such as faceted search, full text search, rich document handling and dynamic clustering. Out-of-the-box, it provides indexed search for CogniDox documents (including full text search). The Apache Tika project, which is commonly used alongside Solr, has an extensive list of supported Document Formats (http://tika.apache.org/1.5/formats.html).
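To give a flavour of what faceted search looks like at the Solr level, a query is just an HTTP request against a core’s /select handler. The core name ‘cognidox’ and the field names below are assumptions for the example, not documented CogniDox values:

```python
from urllib.parse import urlencode

# Build a faceted full-text query URL for a Solr core. The core name
# ("cognidox") and fields ("text", "doc_type") are illustrative only.
def solr_query_url(base, text, facet_field):
    params = {
        "q": f"text:{text}",         # full-text search on the indexed body
        "facet": "true",             # ask Solr to facet the results...
        "facet.field": facet_field,  # ...by this field, e.g. document type
        "wt": "json",                # request a JSON response
    }
    return f"{base}/select?{urlencode(params)}"

url = solr_query_url("http://localhost:8983/solr/cognidox", "datasheet", "doc_type")
print(url)
# The URL can then be fetched with urllib.request or any HTTP client;
# the JSON response contains both the hits and the facet counts.
```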
In essence, Solr provides the fastness and deepness; and because it is open source, it leaves the way clear for any wideness initiative.
In this blog I want to start a new theme – projects that can change the way your company works.
All companies are unique to a degree, but there are many common issues that the majority are trying to solve. In the technology world we talk so much about features that it can be difficult to relate these back to the problems they are meant to address. I want to approach it from the other direction: what is the problem and how can software (CogniDox in particular) address that problem?
In the first of this series I’m going to consider a biggie: how do we make an efficient process for actually releasing a product? We read scholarly articles about innovation and the overall process / development methodologies that might help, but what about the mechanics of actually making a product release in the leanest manner possible?
If we succeed in this, we can say we have an efficient Product Release Engine.
Most high-tech companies have multiple products. Most products combine multiple project deliverables; from different hardware and software teams as well as technical writers and training. Most product releases are complex and require a specific configuration of elements to work reliably and properly.
You can have great teams using the best tools, but still suffer from information silos in your product development. The hardware team may have files created in AutoCAD or SolidWorks for CAD design. The software team may use Git or Perforce for the version control of software programs. The technical authors may use a DITA-compliant XML tool for the user guides. And so it continues across Training, Technical Support, and other groups.
There are two other common problems.
The first is that the task of making a product release can be hard to pin on any one job role. It could be the Product Manager, but they may think their job is about managing user requirements, prioritising product features and building roadmaps. Project Managers may only be concerned with milestones and finishing on time, rather than what happens after. It could be the Software team – after all, they’ll likely have a software configuration tool in place and will be familiar with the language of branches, builds, and releases. But software is only one stream in the overall product, so this is necessary but not sufficient. The solution is to make this an explicit job title – the Release Manager. It doesn’t always have to be a full-time role or person, but it should be clear who is responsible. Their responsibility is to validate that all release components have been approved for release by the technical, product, and executive teams.
The second problem is that there is often a gap between the product deliverables and the entitlements of the customers receiving the product. Even if someone is responsible for the release, they often lack the tools to manage a matrix of products and customers. Even when it is managed using the ubiquitous spreadsheet, a manual decision step still sits between the release and anything reaching the customer.
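That product/customer matrix can be as simple as a mapping from a (product, version) pair to the set of entitled customers, which release tooling can then check automatically instead of a manual spreadsheet lookup. The products and customers below are invented for illustration:

```python
# A minimal product/customer entitlement matrix: release tooling
# consults it automatically rather than a manual spreadsheet check.
entitlements = {
    ("WidgetPro", "v2.1"):  {"AcmeCorp", "Globex"},
    ("WidgetLite", "v1.0"): {"AcmeCorp"},
}

def customers_for_release(product, version):
    """Who is entitled to automatically receive this release?"""
    return sorted(entitlements.get((product, version), set()))

print(customers_for_release("WidgetPro", "v2.1"))   # ['AcmeCorp', 'Globex']
print(customers_for_release("WidgetLite", "v1.0"))  # ['AcmeCorp']
```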
So what can be done to link together the different teams that contribute to a product development and prevent ‘silos of information’ forming in the company?
CogniDox is a ‘silo-linker’ that solves this problem and gives the Release Manager a useful set of tools.
If you’d like to know more about the tools that CogniDox provides for Product Managers, feel free to contact us for more information or a demo: http://www.cognidox.com/about-us/contact-us