Adobe DITA World 2017 – Day 1 Summary by Danielle M. Villegas



Hello everyone! My name is Danielle M. Villegas, and I’m the “resident blogger” at Adobe DITA World 2017. In this blog post I will sum up the presentations of Day 1 of the conference.
There was a lot of information on this first day of Adobe DITA World, but hopefully, I’ll be able to give you some of the highlights of each talk.
After Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened up the virtual conference room, things started quickly. We were told that last year, more than 1,400 attendees signed up for the event. This year, Adobe DITA World got more than 2,500 registrations worldwide. That’s a lot of people attending!
The conference started off with a short welcome note from Adobe President and CEO, Shantanu Narayen. His main message was that our devices enable us to do so much more and in a personalized way, and we are the creators! He emphasized that this week, we’ll be hearing from experts who will help us create, manage, and deliver world-class content for the best customer experiences. Adobe provides all the tools to make this happen!

In this post:

[Keynote] Scott Abel: “The Cognitive Era and the Future of Content”
Juhee Garg: “Technical content as part of your Marketing Strategy”
Philipp Baur: “The Triple C of Good DITA”
Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”
Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”
Sarah O’Keefe: “Content – Is it really a business asset?”
Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”

Keynote from Scott Abel: “The Cognitive Era and the Future of Content”

Scott Abel is the CEO of “The Content Wrangler” company, which is the official media partner of Adobe DITA World 2017. Scott is always a dynamic speaker!
The main focus of Scott’s talk was how the future of technical communication will be about creating content that does things FOR our customers by producing machine-ready content, because content is a business asset!
Scott opened his talk with some statistics about obesity. As someone who is watching his own health, he used the business of his nutritionist, Manuel, as an example to explain how Manuel needed to build better capabilities into his content. (Manuel hired Scott after helping Scott reach one of his health goals – a satisfied customer!) Manuel needed to publish his content to multiple channels, but lacked capabilities like personalized content. His content was created to be read by humans, not computers, which prevented automatic interchange between systems. This problem could be fixed through single-source publishing, adopting a unified content strategy, creating intelligent content, or even adopting DITA for topic-based content. However, that might not be enough to beat the competition. A differentiator was needed, but right now Manuel’s business isn’t scalable. Patients want exceptional experiences, yet we make them search for what they need. As content creators, we need to focus on how we deliver those exceptional experiences. Customers don’t want to learn your jargon or hunt for things; they don’t want to do work that should already have been done for them to get to what they want.
This is where, Scott explained, cognitive computing comes into play. Cognitive computing involves self-learning systems that learn at scale, reason with purpose from data, and interact naturally with humans through natural language processing. It’s a collection of different applications. Manuel could use cognitive computing to collect various preferences and habits, as well as family and other health history data, and combine it with customers’ personal data and public data to create a personalized content experience for his customers.
What if he could connect his services to others offering similar services? Scott presented the idea that personal service managed using content management can yield an exceptional customer experience.
What if you could do the same thing? Scott suggested that it takes at least five steps to go in this direction:

You must have a willingness to explore without always having ROI in mind,

You will need a disruptive mindset,

You will need intrapreneurial thinking – be a risk taker,

You will need top-level leadership support, and

You will need to have the resources, time and budget.

While cognitive content is the future, it’s not as close as we’d like to think. Depending on whom you ask, artificial intelligence (AI) is estimated to reach full practice somewhere between 28 and 75 years from now! Cognitive content relies on AI, which was originally derived from science-fiction ideas.
There are three main types of AI, as Scott explained:

Strong AI – This is AI like in the movie “Her,” where the AI had god-like intelligence.

Full AI – This would be more generalized AI, able to perform intellectual tasks, like HAL in the movie “2001: A Space Odyssey” passing a Turing test.

Narrow AI – This is what we have now, also known as Weak AI. Examples of weak AI include digital assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, or Google Home. These all require machine-ready content, as they are mostly chatbots. We provide commands, and the chatbots provide answers within their programmed scope.

We’re stuck in the assistive past, utilizing assistive solutions. We need to move towards acting on behalf of our users to help them achieve their goals, which means we need agentive solutions that work like personal valets. We are starting to move in that direction, but we’re not quite there yet. Examples of narrow AI agents would be a Roomba or a Nest thermostat, in which the AI learns your behavior. Information awareness plus machines doing the work equals an agentive action, like a Google Alert.
How do you decide between assistive versus agentive solutions?
Agentive solutions are delegated, measurable, user-focused, and driven by user input; anything else is merely assistive or automatic. Agentive solutions are vigilant, don’t need reminders, and don’t suffer from decreasing expertise. They are attentive to details, don’t look for workarounds, and are built for speed. Assistive solutions don’t offer these features.
Scott warned that the perceived dangers of using AI are:

“AI Washing,” which is basically marketing mumbo-jumbo,

AI will create autonomous weapons used to kill us, and

Robots will replace us.

Scott concluded that many types of niche content professionals will be needed moving forward. Technical communicators are important in the content equation! For further reading, he recommended the book Designing Agentive Technology by Christopher Noessel. He also invited us all to attend the conference he runs, Information Development World, which takes place November 28–30, 2017, and is set to be a great conference about preparing for chatbots and other cognitive computing.

Juhee Garg: “Technical content as part of your Marketing Strategy”

Juhee Garg works at Adobe on the XML Documentation Add-on for Adobe Experience Manager (AEM), Adobe’s enterprise-class DITA CCMS.
Juhee started out by talking about the digital evolution, whereby the behavior of buyers is changing because they can learn a lot of information with just the click of a button. Buyers are now forming opinions based on digital searches. Business buyers don’t contact suppliers until 57% of the purchase process is complete.
A typical buyer research process might start at the product website, proceed through white papers, product manuals, how-to videos, user guides, and case studies, then a competitive comparison, before finally ending at an admin guide. Buyers now move between marketing content and technical content, as it is all product information. The boundaries between these kinds of content are blurring, yet technical content is not usually part of the marketing strategy because it’s considered a cost center and lacks IT support. A better alignment of these kinds of content is needed. However, that’s hard to do when separate ecosystems are creating different content. System integration is an IT nightmare: it can be hard to coordinate tech content with web CMS/marketing content, difficult to keep templates in sync, maintain content integrity, and push updates; shared content can get duplicated; and it’s difficult to maintain multiple systems.
How do we break down the silos? We can bring in the appropriate tools, and bring the two content creation groups together on a common platform and content model that can deliver to users. The advantages of this approach are a unified content strategy, a consistent user experience, and shared and reused content, resulting in effective content and communication.
The XML Documentation add-on for AEM is a tool that provides that link. It enables authoring and collaboration with DITA content directly in AEM, providing end-to-end content management capabilities and multi-channel publishing.
The benefit is blended publishing: it allows you to inject DITA-based technical communication content directly into AEM, mixing marketing and technical communication on one website.
Juhee gave us a demo of how this works directly in AEM. The add-on provides a WYSIWYG-friendly editor that allows someone who is not familiar with writing in DITA to write and edit in AEM in a DITA-friendly way. There is still a source view as well, so you can see all the XML tags and tweak as needed if you are a tech writer. All DITA features are supported by this editor. The publishing model is also very user-friendly, and it’s easy to move elements around in the structure to change the taxonomy as needed. DITA can be published as an AEM site, and you can reuse templates from the marketing site if needed. Publishing is easy: you can publish content as a website, a PDF, HTML5, an EPUB, and other advanced formats. Pagewise PDF is a special output feature that creates a PDF of each AEM page in the site. Much of the editing of a website in AEM is “drag and drop” of components/widgets, which looked very easy to do! Through the demo, Juhee showed how marketing and tech comm can align easily using these tools, and how it all looked when published. The add-on can be configured for whichever version of DITA you are using, as well as for DITA specializations. AEM integrates well with Adobe Marketing Cloud and Adobe Target, so you can see analytics as well. The new 2.5 release, expected next week, will include these features and new ones as well!

Philipp Baur: “The Triple C of Good DITA”

Philipp Baur is from Congree Language Technologies, a 30-year-old company based in Germany which focuses on software and services for author assistance, serving about 90 customers. Congree Authoring Server checks spelling, grammar, style against company standards, terminology against the term database, and abbreviation use; it looks up similar sentences and terminology information, and stores new content for everyone to use, automatically and in real time as you write. It is directly integrated into the editor you are using and can be used company-wide for consistency.
Philipp started his talk by reviewing topic-oriented documentation and DITA. He defined a topic as:

Independent information carrier

Contains enough information to be viable by itself

Answers a specific & unique question

Can be combined freely with other topics

Not created for specific documents but for the entire company

Why would we write this way?

Topics make content more manageable

Several authors write on the same document

Makes proofreading and translation more flexible

Saves money by reusing content

Modern devices require optimal space management

Single point of truth

Easy to apply thanks to standards like DITA

DITA offers a predefined structure for topics, and with the help of metadata, topics can serve different target groups, products, and purposes.
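To make that concrete, here is a minimal sketch of a DITA concept topic – my own illustration, not Philipp’s; the id, title, and audience value are made up:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="cleaning_overview">
  <title>Cleaning the device</title>
  <prolog>
    <metadata>
      <!-- Metadata lets the same topic serve different target groups -->
      <audience type="user"/>
    </metadata>
  </prolog>
  <conbody>
    <p>Clean the device monthly to keep it working reliably.</p>
  </conbody>
</concept>

Because the topic answers one question and carries its own metadata, it can be combined freely into any map that needs it.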
The Triple Cs of good DITA were defined as cohesion, consistency, and coherence.
Cohesion:

It’s the glue between two sentences.

It’s necessary for the reader to link two sentences.

Examples would be words like and, so, yet, etc. or pronouns like this, some, or it.

Unnecessary or wrong use of cohesion undermines the purpose of topics.

Example: I like my cat. The cat would kill me if she could.
Change to: I like my cat. But she would kill me if she could.
Coherence:

Ensures that content has some sort of inner connection.

Avoids contradictions.

Avoids confusion.

Incomplete topics increase the risk of confusion.

Example: My cat is not for sale. Contact me if you want to buy my cat.

Consistency:

The invisible thread accompanying the reader through your documentation.

It can be split into language consistency, style consistency, and content consistency.

Language consistency – British vs. American English and spelling, etc.

Style consistency – how the user is addressed; tone of voice; use of passive voice; level of politeness; sentence complexity; use of modal verbs

Content consistency – identical sentences for identical ideas; using the same word for the same concept; violations are problematic for translation and for the reader

Hard to achieve

Inconsistencies throw off the reader, interrupt concentration and can lead to misunderstandings

How Congree can help

Users can use Congree in conjunction with FrameMaker.

Philipp gave a demo, which showed how Congree displays violations that need correcting in a FrameMaker document so that they can be fixed for consistency. You can click on each violation to make changes as needed, and the style guide is integrated into Congree.

You can learn more about Congree on their website.
You can contact Philipp at pbaur@congree.com, or at info@congree.com. And check out the Congree channel on YouTube! If you are interested in seeing a personal demo to see if this is an appropriate product for you, email Philipp!

Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”

Ulrike Parson is also from Germany and owns Parson AG. She presented a case study based on work she did with a semiconductor company that showed how she and her colleague broke down the content silos of her client using DITA as the glue.
The challenges they faced:

They had to look at customer-facing and developer technical documentation.

Documents were created by different groups.

Information was created in different life cycle phases.

There was a diversity of tools for authoring, content management, and publication.

Reuse across lifecycle phases and systems was done mostly by copy and paste.

Changing information and keeping it consistent required a high effort.

It was impossible to estimate the consequences of change.

The team held many workshops and meetings to define measures and set goals: connect the content silos, make information consistent and reusable across systems, bring together information products and company groups, and make relations between artifacts from different domains visible. The intended flow was to create a requirement, then provide a test case, then develop the code or device, which would yield the documentation.
The solution to connect the silos involved defining the requirements and engineering domains. They found that semantic middleware was needed, with connectors to each group for importing objects and relations. The output chosen was DITA for content, with metadata for better control of all the imported elements. Instead of using one tool for everything, this allowed teams to keep each tool for its original purpose, with full features required only there. It allowed for different update and release cycles, and left functioning, hard-to-change workflows intact. It used the existing IT infrastructure and focused on reuse and consistency requirements.
Documentation 3.0 was essential to making this work. It included:

Inputs fed in from the original silos, made up of:

Requirements – original systems, formats, exports, etc., with format export in DITA

Test Cases – original system, export format, test specifications produced in DITA

Source Code – original system, export tool, code comments, all in DITA

Technical parameters – defined characteristics, parameters, features, and values – DITA subject scheme maps and DITAVAL (see the DITAVAL sketch after this list)

Documentation – DITA XML-based files, managed, with subject scheme maps for variants and DITA key maps for configurable data
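To give a flavor of the DITAVAL piece, here’s a minimal sketch of my own (the audience values are hypothetical): a DITAVAL file simply lists which attribute values to include or exclude when a variant is published.

<?xml version="1.0" encoding="UTF-8"?>
<val>
  <!-- Customer variant: keep customer content, drop internal-only content -->
  <prop att="audience" val="customer" action="include"/>
  <prop att="audience" val="internal" action="exclude"/>
</val>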

It would all feed into the middleware, which would include:

A semantic model for intelligent information,

Products like IBM Watson or graph database products,

Components that would act as the glue (the components used in the products)

Import-Export interfaces

Use established standards: DITA, RDF, ReqIF

Use established interfaces like REST

No interface? Use a standard exchange format. Start with one direction only

Reusable DITA modules

Use of DITA

Use a centralized framework, templates, and subject scheme maps

Use intelligent referencing mechanisms and configurable data as keys (see the key sketch after this list)

If no CMS, find a way to trace use of modules

Consider how much of the semantics to transport to DITA
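Here’s the promised key sketch – my own minimal example, with a hypothetical key and product name. The map binds the key once, and topics reference it indirectly, so one edit in the map updates every deliverable:

<map>
  <title>Product documentation</title>
  <!-- Bind the key once, in the map -->
  <keydef keys="product-name">
    <topicmeta>
      <keywords><keyword>Widget 3000</keyword></keywords>
    </topicmeta>
  </keydef>
  <topicref href="overview.dita"/>
</map>

<!-- In any topic, reference the key instead of hard-coding the name -->
<p>Install the <keyword keyref="product-name"/> near a power outlet.</p>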

Documentation was formed by combining generated and authored text. They found that single-sourcing documents, and single-sourcing document variants for publication on the website, for internal use, and for certifications, worked optimally. Documents could be published automatically on a build server, with generated DITA modules monitored for changes in the original systems, all on a company-wide DITA framework.
They had less success building a dashboard in the web portal; it was not as successful as hoped. There were issues with traceability of modules from source to documentation, and problems with coverage analysis and metrics derived from relations, such as those between requirements and test cases. And despite creating a central access point for all information for development projects, it was hard to get workers to migrate, as they were used to their old ways.
Lessons learned from this experience were the following:

Reuse of information must be based on a solid and scalable metadata model.

Use of standards makes your solution future proof.

DITA provides a good basis for intelligent content.

Creating integrated information for reuse requires a corporate effort.

Integrated information requires new processes.

Migration can be a huge effort.

“More authors” means “more training.”

Ulrike considered what it would take to work towards Information 4.0, but said, for now, it’s better to stick to Information 3.0 because:

Intelligent content is more than reusing information.

Intelligent content is modular, machine-readable content, enriched and delivered with metadata for enhanced usage.

DITA is the perfect basis for intelligent content, as it supports modularization and metadata.

Standards for metadata for technical communication are emerging (like iiRDS).

Technical communicators will become content and metadata curators.

You can contact Ulrike at ulrike.parson@parson-europe.com

Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”

Thomas Aldous has been in the technical communications industry for 30 years, including stints at InTech, Adobe, Acrolinx, and now consulting as The Content Era.
The goal of this session was to provide solutions for three audiences: those who have non-DITA XML content in a non-FrameMaker application but would like to change authoring and publishing environments; those who are currently authoring non-DITA structured XML content and would like to migrate slowly from their current structured or unstructured content to DITA; and those who would like to manage all content in a DITA XML structure and publish to outputs like a complete website, HTML5, PDF, a mobile app, or help.
Tom was going a little fast for me to keep up with him, but this is what I was able to glean:
A DITAMAP can let you organize topics that you want to publish. You can also generate navigation files based on the map structure and generate links that get added to the topics.
A map file references one or more XML files using <topicref> elements. The <topicref> elements can be nested to reflect the desired hierarchical relationship of the topics.
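If you haven’t seen one, a minimal DITAMAP looks something like this (my own sketch; the file names are made up):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>User Guide</title>
  <topicref href="introduction.dita">
    <!-- Nested topicrefs reflect the desired hierarchy of topics -->
    <topicref href="setup.dita"/>
    <topicref href="maintenance.dita"/>
  </topicref>
</map>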
Why does this matter? FrameMaker supports DITA, including DITAMAPs, even if the content is structured in a non-DITA structure, and can be configured for most structures.
Tom called FrameMaker the “monkey wrench” of structured publishing, as it can handle just about anything related to DITA.
XML content comes in several “flavors”:

1 long file

1 small map file with pointers to files in the order they should be published; entity references do not normally have DTD callouts

1 small map file with a pointer to a file in the order they should be published; with Arbortext and others, these use DTD callouts

DITA – DITAMAPs and bookmaps have pointers to topic files in the order of publishing

If you start with one long XML file (the example he used had over 6,000 lines in it), the long XML file can be converted into a DITAMAP by cutting it up into chunks of content with some scripting and then mapping them.
Tom noted that there are lots of examples of custom XML structures and other standards, and that you don’t have to move completely to DITA; you can also create an XSL stylesheet to transform your current XML into the DITA structure.
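As a sketch of that approach (mine, not Tom’s – assuming a hypothetical custom schema with <chapter>, <heading>, and <para> elements), an XSL stylesheet can map custom elements onto DITA ones:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Turn each custom <chapter> into a DITA topic -->
  <xsl:template match="chapter">
    <topic id="{@id}">
      <title><xsl:value-of select="heading"/></title>
      <body><xsl:apply-templates select="para"/></body>
    </topic>
  </xsl:template>
  <!-- Turn custom paragraphs into DITA <p> elements -->
  <xsl:template match="para">
    <p><xsl:apply-templates/></p>
  </xsl:template>
</xsl:stylesheet>

A real transform would need more templates and a DOCTYPE on the output, but the principle is just this element-by-element mapping.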
Tom proceeded with a demo, which he started by opening the long XML file. He showed that you can bring in the DTD, name your application, create a template and read/write rules, set the namespace, define doctypes, and support entity locations.
Using an ExtendScript utility that The Content Era created to chunk the files, he was able to create the DITAMAP as well. The XML view lets you configure content any way you want, and the ExtendScript merges all the chunks seamlessly.
Within FrameMaker, he did this from the top navigation via Structure > Structured Application Designer. You load an existing application, then add all the details in the pop-up screen. Tom warned that read/write rules are the most difficult and powerful part, but they’re now easily editable in FrameMaker, as you can add the template, add doctypes, etc.
His advice was that you should understand your own domain content – make it intuitive, and create solutions for your content.
Tom likes complex challenges, so contact him if you are really stuck! He reminded us that XML is 16 years old now, so it’s a strong standard.
You can contact Tom through his company website, LinkedIn, or on Twitter. He’ll next be seen at an Adobe event the day before the start of LavaCon 2017.

Sarah O’Keefe: “Content – Is it really a business asset?”

Sarah O’Keefe from Scriptorium Publishing contends that content is a business asset, especially if it’s good content. It means that people don’t return products or call customer service. Quoting Tim O’Reilly, “Technical information changes the world by spreading the knowledge of innovators.”
How is content an asset?

Meets regulatory requirements

Enables customers to use a product successfully

Provides reference information to prospective buyers doing research

Supports the brand message

When assets go wrong, it can be due to a number of reasons. They can include:

Product is recalled because of incorrect content

Frustrated customers return products – 25% of returns are due to bad instructions

Prospective buyers don’t find what they need

It contradicts the branding

Information is out of date

Bad execution

The content is not appropriate for the audience

How do you determine if your content is an asset or a liability? It needs to meet a hierarchy of content needs.
The minimum amount of viable content meets the Available, Accurate, and Appropriate levels of that hierarchy. If these aren’t met, the content is a liability. Content that is also Connected and Intelligent is an asset.
The customer journey now has to be looked at holistically. Content types are converging – we used to have a marketing funnel, but now we have a circular process. In the marketing funnel, you matter until you buy, and then you don’t. It’s the battle of pre-sale versus post-sale documents, and persuasive information versus product information. In the customer journey, we care about you at every step – you matter through the whole process. Convergence happens when all the different documentation is used together.
Sarah gave an example by telling a story about the disconnection between the website and the instructions included in the box of a product she bought. She emphasized that, unfortunately, you can’t control content use in the customer journey.
The Internet of Things (IoT) and the connected enterprise pull in many of these concepts, in which content is a huge asset. In the connected home, you can communicate with devices in your smart home, getting information and having the devices perform actions. The connected enterprise is the connected factory – Industry 4.0, robotics, and automation – with related security concerns.
IoT devices require intelligent content that is location-aware, time-aware, context-aware, and system-context-aware, and that provides context-sensitive help. This can be achieved by improving search: searchability (information is exposed to a search engine), findability (information shows up when people search for it, performs well for certain keywords, etc.), and discoverability (other people create links to your content, others recommend your content; reputation matters). Your reputation affects content distribution!
Digital business transformation occurs through good data hygiene. The ways to achieve this include:

No more back-formation of data

Single source of truth

Content is derived from data

Content is not data storage

For example, a product gets made, then technical publications capture information. Then the product specifications change, but the corrections aren’t made at the source. The document is now the source of truth, which is not an appropriate role for tech pubs. Content Management 1.0 needed, namely, traceability (where did the content come from?), content usable in various forms, distribution, and localization workflows (reduce, reuse, recycle). Localization is very important in this process.
Sarah summed up this point by saying that good content is an asset if you follow content trends and go beyond technical accuracy.
Sarah has written a white paper on the topic called The Age of Accountability: Unifying Marketing and Technical Content with Adobe Experience Manager, which you can access for more information. Technical documentation is all about scalability. Sarah concluded that content needs to be useful and consistent for the customer at an affordable rate.

Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”

Robert D. Anderson from IBM has been working on DITA-OT almost since its inception.
What is DITA Open Toolkit?

Open Source software

It’s a program (technically a collection of programs) intended to read DITA and produce something else

It’s not part of DITA, but it’s there to make your DITA do something

DITA-OT is software that turns your stuff into something else (that’s not usually DITA)

It’s an implementation of DITA

Originally a developerWorks project at IBM

DITA-OT became open source when DITA became an open standard

Without tools, who would use DITA? And if DITA weren’t a shared standard, who would want DITA-OT?

DITA-OT was created to help all DITA users get off the ground more easily, including authors and vendors trying to support DITA.

DITA-OT core features:

Key resolution

Content references

Link and metadata management

Filtering

Branch filtering, and more

It also includes pre-processing steps like merging DITAVAL conditions, merging maps, retrieving link text, evaluating @copy-to, adding DITAVAL flags, and more

How do you run pre-processing yourself? You don’t, usually – customizing it is for those who want to super-customize things
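To picture what that pre-processing does for you, here’s a small content-reference sketch of my own (the file and id names are hypothetical). The canonical text lives once, and the toolkit swaps the real element in wherever it’s referenced:

<!-- In warnings.dita (a topic with id="warnings"), the text lives once: -->
<note id="unplug" type="warning">Unplug the device before servicing it.</note>

<!-- Any other topic pulls it in by reference; DITA-OT resolves this during pre-processing: -->
<note conref="warnings.dita#warnings/unplug"/>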

From core to publish:

Ships formats out of the box: HTML5, PDF, XHTML, Eclipse Help, CHM, troff, and a few others (RTF, ODT, JavaHelp). Some are add-ons that are no longer maintained.

Plugins available for other formats

Styles are generic and meant for customization. Check out Jarno’s PDF generator to create a custom PDF.
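For reference, producing one of those out-of-the-box formats is a single command with DITA-OT 2.x’s dita command (the map name here is made up):

dita --input=userguide.ditamap --format=html5 --output=out
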
More exciting stuff that DITA-OT can do:

Add preprocessing steps

Add or modify generated text

Custom HTML5 navigation

Switch or extend CSS

Use XSLT to override styles

Create entirely new output formats

Extensions are usually stored in a plugin, as with the PDF plugin generator (see the sketch below)
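Here’s the promised plugin sketch – my own minimal example following the documented plugin.xml pattern (the plugin id and stylesheet path are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<plugin id="com.example.xhtml.overrides">
  <!-- Hook a custom stylesheet into the XHTML transform -->
  <feature extension="dita.xsl.xhtml" file="xsl/my-overrides.xsl"/>
</plugin>

Once the plugin is installed, the toolkit applies the override whenever the XHTML transform runs.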

FrameMaker does not have to use DITA-OT, as it can publish to PDF without it. But the more complicated your output needs get, the more you’ll use the toolkit.
Should you care about toolkit updates?

If you’ve decided to use an open standard, if you or your tools or any partners use DITA-OT, or if you want the benefit of common, shared open source, then yes – update!

When you work with business partners who use a custom HTML5 framework, use an elaborate PDF style with custom plugins, need to publish as Morse code, or feed XML input into an automated system, then you need DITA-OT

Updates bring things like:

Common preprocess fixes

Changes to how final rendered content is generated for all

Who governs DITA-OT?

Active participants – anybody can participate, and the more you participate, the more influence you have

Most are from language, communication, and computer science backgrounds

With great open source comes great responsibility.

Most are volunteers or report to their own managers

If anyone CAN fix a bug or add a feature … then sometimes you have to add it on your own

Robert also touched on the useful skills to have for working with DITA-OT.

The best way to suggest changes?

GitHub pull request

GitHub issue tracker

Attend contributor calls

Ask your DITA vendor

Robert also provided a list of resources for learning more.

Day 1 Conclusions
The day concluded with Stefan and Dustin thanking today’s presenters, and inviting everyone to return for tomorrow’s presentations.
See you tomorrow on Day 2 of Adobe DITA World 2017!