Adobe DITA World 2017 – Day 2 Summary by Danielle M. Villegas


Hello again, everyone! Your Adobe DITA World 2017 “resident blogger” Danielle M. Villegas is here again. I hope you liked yesterday’s summaries. There was a lot going on yesterday, and still a lot today! After some technical difficulties on my end, and with yesterday’s audio issues on the call seemingly fixed, we got started with a quick round of “Where in the world?”, asking participants to chime in via the Chat Pod in Adobe Connect with where they were connecting from. Participants hailed from Asia, Europe, and North and South America! Stefan and Dustin told us that registrations for DITA World had gone up, since registration for today and tomorrow was still open. DITA World now has over 2,500 registrants, which makes it the second-biggest technical communication conference in the world (only the tekom / tcworld conference is bigger)!
The day was actually filled with a lot of talk in the Chat Pod, not only with questions from participants and speakers answering them after their presentation time was up but just fun chatter along the way. (Tech comm people have a great sense of humor, for sure!)
Again, Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened the room and welcomed the audience. At 9 am Pacific Time we started with our Day 2 keynote speaker, Rahel Bailie!
I wrote as many notes as I could keep up with from the presentations, as again, we were all hit with such a tsunami of information that it could be overwhelming at times! I’ve tried to hit the highlights below.

In this post:

[Keynote] Rahel Anne Bailie: “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content”
Noz Urbina: “AI Chatbots – Just another channel?”
Kristen James-Eberlein: “The State of DITA: Ten Years and More”
Markus Wiedenmaier: “Set your content free! Work with any Format as DITA in FrameMaker 2017!”
Magda Caloian: “Turn the right keys and content falls into place”
Christian Weih: “Going global with your content: With DITA and the Across Language Server”
Andrea L. Ames: “Structured Content … Sexy? Strategic? Or Both?”

Keynote from Rahel Anne Bailie: “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content”

Rahel Anne Bailie, Content Architect at Scroll LLP, kicked off the day with the keynote “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content.” She emphasized that structured content is no longer a luxury. It’s becoming a necessity!
She continued by giving an overview of what structure is. She felt that the best definition was that it’s an arrangement of, and relations between, the parts or elements of something complex. She pointed out that normal HTML tagging doesn’t specify enough for machines to read it, but by using DITA tagging, content becomes more machine-readable. She also pointed out that semantic content is intelligent content, and recommended that participants check out the book Intelligent Content in The Content Wrangler series for more information.
Rahel discussed the benefits of Intelligent Content as being the following:

Discoverable – In practical terms, content is automatically discoverable, which means that search engines can find your content because they understand not only the text, but also the intent behind it.
Re-usable – Content can be reused across topics by using content components that can be combined in “mix and match” ways to build new topics or output to different formats.
Reconfigurable – Content can be repurposed in different contexts in a way that content can be automatically sorted or filtered, as well as included or excluded in different ways.
Adaptable – Content can adapt to specific contexts by being served up in different ways, depending on the device, audience, market, or other personalization variables.
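To make “reconfigurable” and “adaptable” concrete, here is a minimal sketch of my own (the component texts and the `audience` values are invented for illustration, not from Rahel’s talk) of filtering content components by an audience attribute, in the spirit of DITA’s conditional processing:

```python
# Minimal sketch of conditional filtering: each content component carries
# metadata (like DITA's @audience attribute), and a build selects only the
# components that match the target context.
components = [
    {"text": "Insert the battery.",          "audience": "all"},
    {"text": "Calibrate the torque sensor.", "audience": "expert"},
    {"text": "Plug in the charger.",         "audience": "novice"},
]

def assemble(components, audience):
    """Include a component if it targets everyone or the requested audience."""
    return [c["text"] for c in components
            if c["audience"] in ("all", audience)]

print(assemble(components, "novice"))
# The novice build excludes the expert-only calibration step.
```

The same source components can be assembled differently per device, market, or audience, which is the whole point of keeping the metadata separate from the prose.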

The big question is, though – are you using content to promote your brand? You should be, as content should be consistent across your brand. Content is the front door to your digital presence: it shapes how consumers understand your brand, provides critical aspects of acquisition, paves the way to customer retention, and helps with renewal.
In fact, the cost of acquisition is five times the cost of customer retention! We should be working to keep existing customers happy. 89% of companies see customer experience as a key factor in driving customer loyalty and retention. Customers want support, not bling; they want to be able to use the product and get support no matter what. They want it to work! The problem is, 80% of companies think they deliver a superior customer experience, but only 8% of their customers agree.
Why should we make a commitment to structured content? Structured content shouldn’t be treated like a temporary fix. We need to articulate the “why” to the budget drivers and explain that it supports many of the major business drivers.
Those drivers include the following:

Brand loyalty – Getting people to buy a product or service and stay. Brand loyalty equals customer trust. Commonalities among successful brands include uniqueness, consistency, accuracy, relatability, and trustworthiness. Within those traits, structured content helps focus on accuracy, consistency, and relatability.
Market Expansion – Markets need to keep expanding next door or around the world, keeping in mind that there are many audience variants.
Risk management – Digital evidence shows what works and what doesn’t.
Customer Retention – 80% of future profits come from 20% of existing customers. One of the top reasons customers switch providers has to do with poor content communication.
Increase in capacity – This isn’t a topic often discussed. Delivery is the easiest expense, and costs go up with each added level of complexity.

So how can you automate content to add value? Technology, governance, people, and uses all play a part. When creating high-performance teams, high-performance tools need to be provided to them as well, such as CCMS, XML editors, or products like FrameMaker.
Structure makes content work better!

Noz Urbina: “AI Chatbots – Just another channel?”

This presentation definitely stirred up a lot of conversation and took some of yesterday’s discussions about chatbots to the next level.
Noz Urbina, CEO at Urbina Consulting, started his presentation by asking us, “How are we going to fit this new channel into our ecosystem of already too many channels?” His answer was that his consulting firm’s mission states the best approach to this: “We help organizations have the kind of relationships with people that people have with each other.”
The trick is that we don’t want this new chat thing to become too disjointed, because a bad initial experience will yield less adoption.
Noz provided a LOT of concepts, types, and terms throughout his talk, so I’ll present most of my notes here, though many of his slides explained things better.
The term “conversational interfaces” is an umbrella term for chatbots, personal assistants, and voice-controlled user interfaces, as well as hybrids, which can mix modalities. They connect humans to machines (like Amazon Echo’s Alexa and Microsoft’s Cortana). When a user asks or types, “What’s the temperature?” → the bot says, “The temperature right now in Valencia is 23°” → the bot shows a chart with weather info. The intent (get weather) is within the grammar of known commands. The parameter slot is the city, and the slot type is the city entity.
Between the request and the response, the bot either recognizes a specific command from predefined grammar or uses natural language processing (NLP) to parse the input to determine the intent. A Grammar can have alternative phrasing, such as in the case of requesting the weather forecast, the “get weather” intent could accept both “what is the weather like” and “what’s the temperature” without NLP. When a bot is unable to answer within its current modality and is forced to make the user switch, then this is a “cliff”. If the bot is hitting too many cliffs, the protocol needs to hand over the request to a person.
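As a rough sketch of the intent/grammar/slot terminology above (the grammar patterns, city list, and function names here are my own invention, not from Noz’s slides), a bot without NLP can simply match alternative phrasings against a predefined grammar and extract the city slot:

```python
import re

# Hypothetical grammar: one intent ("get_weather") with alternative phrasings,
# and a city slot filled from a known set of city entities.
GRAMMAR = {
    "get_weather": [
        r"what is the weather like(?: in (?P<city>\w+))?",
        r"what's the temperature(?: in (?P<city>\w+))?",
    ],
}
KNOWN_CITIES = {"valencia", "berlin"}

def recognize(utterance, default_city="valencia"):
    """Match an utterance against the grammar and fill the city slot."""
    for intent, phrasings in GRAMMAR.items():
        for pattern in phrasings:
            m = re.fullmatch(pattern, utterance.lower())
            if m:
                city = m.group("city") or default_city
                if city in KNOWN_CITIES:
                    return intent, city
    return None, None  # a "cliff": time to hand over to a person

print(recognize("What's the temperature in Berlin"))
```

Both phrasings resolve to the same “get weather” intent, which is exactly the alternative-phrasing idea; anything the grammar can’t match is a cliff.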
When talking about AI (artificial intelligence), we need to talk about how the information is stored and presented. There is machine learning, which is when NLP and algorithms allow an AI to be trained and to improve in a semi-guided way. Another technique is the decision learning tree, in which all interactions are scripted and must follow a certain sequence. The logical equivalent is a touch-tone phone interface, but chatbots are driven by words instead of button tones. The system doesn’t learn the information; it’s simply pre-coded.
The takeaways from the terminology are that chatbot responses are based on their ability to recognize specific, supported intents. Using NLP allows you to go beyond a limited grammar and accept a wider range of questions. Decision trees are a simple way to get started. Every task in your user task analysis could be an intent within your grammar.
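The touch-tone-phone comparison can be sketched in a few lines (the tree contents and names are invented for illustration): a scripted decision tree where every path is pre-coded and unmatched input falls back to a human handover.

```python
# Sketch of a scripted decision tree: every interaction follows a fixed path,
# like a touch-tone phone menu driven by words instead of button tones.
TREE = {
    "start":    {"prompt": "Hardware or software problem?",
                 "next": {"hardware": "power", "software": "restart"}},
    "power":    {"prompt": "Is the device plugged in?",
                 "next": {"no": "plug", "yes": "handover"}},
    "plug":     {"prompt": "Plug it in and try again.", "next": {}},
    "restart":  {"prompt": "Restart the app and try again.", "next": {}},
    "handover": {"prompt": "Let me connect you to a person.", "next": {}},
}

def step(node, answer):
    """Follow the scripted path; nothing is learned, it's all pre-coded."""
    options = TREE[node]["next"]
    return options.get(answer.lower(), "handover")

node = step("start", "hardware")  # -> "power"
print(TREE[node]["prompt"])
```

Note that the tree only “knows” the answers someone scripted into it, which is why this approach is easy to start with but doesn’t scale.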
To make the connection between chatbots and the content, typed chunks are ideal!
In one example he showed, the DITA elements <concept> and <shortdesc> are used, with content and intent broken down within these. It’s a case in which the labelling is essential. Markup can apply to multiple channels, and if tagged properly, can work anywhere. Noz tried it on his Google chatbots with an example of something he wrote, and it worked! Structured content can be simplified into content that chatbots understand better. More markup created better, more responsive answers. However, just because you are doing DITA, it doesn’t mean you are chatbot-ready!
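A simplified sketch of the idea (the topic content is my own example, not the one Noz showed): a DITA <concept> with a well-written <shortdesc> gives a bot a self-contained answer it can serve for a “what is X” intent, while the <conbody> remains for full documentation output.

```python
import xml.etree.ElementTree as ET

# Hypothetical DITA concept topic: the <shortdesc> doubles as a ready-made,
# self-contained chatbot answer.
TOPIC = """\
<concept id="what-is-dita">
  <title>What is DITA?</title>
  <shortdesc>DITA is an open, XML-based standard for structuring,
  reusing, and publishing technical content.</shortdesc>
  <conbody><p>Longer explanation for full documentation output...</p></conbody>
</concept>"""

def bot_answer(topic_xml):
    """Return the short description as the chatbot's spoken/typed reply."""
    root = ET.fromstring(topic_xml)
    shortdesc = root.find("shortdesc")
    return " ".join(shortdesc.text.split())  # normalize whitespace

print(bot_answer(TOPIC))
```

This only works if authors have actually written meaningful short descriptions, which is the point Noz makes below about being DITA-equipped but not necessarily chatbot-ready.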
Pure structural markup is just the shape of the container for the content. The information type makes up the containers that describe universal base types of information. It can be highly semantic, which can translate into explicit meaning in the markup. The trick is to use the right mix of markup!
Chatbots will increase the need for specialization, or at least judicious use of attributes added to our markup. You may be using DITA, but not be chatbot-ready if you’re not already making careful use of short descriptions and even abstracts. The good news is that if you are using DITA, you’re probably much closer than you were before DITA. The bad news is that specializing is expensive and has risks.
The architecture and implementation of this is based mostly on the platform, and can consist of many moving parts.
Technical communicators can prepare themselves for this kind of content by doing the following tasks:

Do a top task analysis. Build a comprehensive list of user tasks, and choose which you’ll support with bots.

Follow with customer journey mapping.
Map complex, multi-touchpoint experience to content, align across silos around customer experience, and derive actionable insights and content standard improvements.
Do a stage-by-stage analysis of what questions users may ask when achieving their objectives.

Build or refine taxonomies

The more you can detect about user context (the sum total of all the user’s situational parameters), the faster you’ll be able to target the right answers.
Include domain modelling.
Be sure to include knowledge graphing. Knowledge graphs support cognitive reasoning systems making it easier for bots to find the right content. (Google for OWL2 or SKOS standards for more information.)
Improve your structures (the markup spoken of earlier).

General implementation tips:

A textual chatbot should consider voice restrictions first, and then be expanded as needed.

All reusable content style guidelines apply. This includes:

Voice, style and terminology. Consistency is vital!
No positional language (“see below,” etc.)
Avoid soft dependencies (“As read in chapter 2,” etc.)

All usual innovation rules apply. Set realistic expectations, then meet them. Test first internally, then in a friendly, small group of external users. If all goes well, then test widely.

So, you can see, chatbots should be just another channel! The problem is that most organizations won’t have content properly written for them. DITA gets you much closer, but most won’t have used it to its full potential. Basic decision trees with every task as an intent aren’t scalable, but we need to be able to publish knowledge maps to chatbot systems, so it’s a good place to start.
Noz answered a LOT of questions in the chat room after his talk as the next presenter was setting up, and the chat was pretty busy during his presentation, so this is a very hot topic!

Kristen James-Eberlein: “The State of DITA: Ten Years and More”

Kristen James-Eberlein, Chair of the OASIS DITA Technical Committee, is a consultant in the field. She has been involved since 2008, and participated by co-editing the DITA 1.2 and 1.3 specifications.
Who develops DITA? OASIS – the Organization for the Advancement of Structured Information Standards – does! It’s a non-profit organization that drives the development, convergence, and adoption of open standards for the global information society.
Things to know about DITA and OASIS:

DITA is an open standard. Open standards enable interoperability. With DITA, interoperability means it can work across different applications and companies.
There are eight consultants who represent OASIS to develop the standards.
DITA started in the 1990s at IBM. In March 2001, IBM made DITA available for public use on developerWorks.
The OASIS committee formed in the spring of 2004; IBM donated the DTDs and docs, but DITA-OT went elsewhere (See our talk with Robert from Day 1.)

Kristen went through the history of the DITA releases, so we could see when certain features came out.
DITA 1.3 broke with tradition, as the standard was released in three packages: a base edition, a technical content edition, and an all-inclusive edition that adds learning and training. These releases were a result of the growth of DITA and the emergence of new audiences. They focused on topic and map as the CORE doc types. The desire was to provide users with targeted packages that contain exactly what they need – no more, no less. As of October 2016, DITA 1.3 Errata 01 was released, with Errata 02 scheduled for later this year.
Kristen feels that the future of DITA lies between the offering of Lightweight DITA, and planning for DITA 2.0.
Why would we want to use Lightweight DITA? We’d want it for ease of DITA adoption and implementation. Full DITA is much more than many need, especially in new markets such as marketing and medical information. We need to be able to map to content in other forms, such as HTML5, Markdown, or whatever arrives in the future. Adoption of this lightweight version will foster growth of low-cost tools and applications.
Design will be essential, as Lightweight DITA will include fewer info types – topics and maps only. It will include a smaller element set, a stricter content model, a subset of reuse mechanisms, and a new multimedia domain. Between DITA 1.3 and Lightweight DITA, 1.3 offers far more doc types and elements.
Lightweight DITA was created to provide a constrained structure with common constraints that can be used interchangeably. Authoring against a common set of constraints enables multi-format authoring instead of structures customized for only one company. Content can then be moved from one place to another with ease, promoting interoperability.
Within the subset of reuse mechanisms, there will be one filtering attribute, available on block elements. Conref will be available on block elements, while keyref will be available on phrase-level elements.
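For readers less familiar with conref, here is a deliberately simplified sketch of how conref-style reuse resolves (the topic content, IDs, and the resolver itself are my own illustration; real DITA processing handles files, ID scoping, and validation far more rigorously):

```python
import xml.etree.ElementTree as ET

# A "library" topic holding reusable content, keyed by element IDs.
LIBRARY = """\
<topic id="warnings">
  <body>
    <note id="hot">Caution: the surface gets hot during operation.</note>
  </body>
</topic>"""

# A document that pulls the note in by reference instead of copying it.
DOC = """\
<topic id="install">
  <body>
    <p>Mount the unit on the wall.</p>
    <note conref="warnings/hot"/>
  </body>
</topic>"""

def resolve_conrefs(doc_xml, library_xml):
    """Replace each conref-bearing element's content with its target's."""
    lib = ET.fromstring(library_xml)
    targets = {el.get("id"): el for el in lib.iter() if el.get("id")}
    doc = ET.fromstring(doc_xml)
    for el in doc.iter():
        ref = el.get("conref")
        if ref:
            target_id = ref.split("/")[-1]     # e.g. "warnings/hot" -> "hot"
            el.text = targets[target_id].text  # pull in the reused content
            del el.attrib["conref"]
    return doc

resolved = resolve_conrefs(DOC, LIBRARY)
print(resolved.find("body/note").text)
```

Edit the warning once in the library, and every document that conrefs it picks up the change, which is the single-sourcing payoff behind these reuse mechanisms.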
Multimedia elements are designed to be compatible with HTML5, for audio and video use. This functionality will also be released as a DITA 1.3 add-on domain later this year or in early 2018. The first public review will open this month! There will be a chance to look at explanatory docs, DTDs, and sample files – OASIS wants the feedback!
As for DITA 2.0 planning, it’s an opportunity for architectural changes. Kristen said that for the 1.x releases, “Our hands have been tied.” For the 2.0 version, there will be more freedom. It won’t be backward compatible, but they will work to avoid being disruptive, plan migration strategies, and not break your stuff in the process of removing deprecated areas and fixing problems.

Possible DITA 2.0 items that are on the drawing board include:

New map type
Mandate a processing order
Redesign of chunking
Remove deprecated elements
Redesign packaging of grammar files
Clearer and tighter conformance statement
Deprecate or remove @copy-to
Separate base and tech content
Anything to make specialization easier and more documented
Enable specialized attributes selectively rather than globally

OASIS needs better input from the DITA community. They are figuring out how the DITA TC and the larger community can interact better. The formal OASIS mechanism currently in place – the dita-comment list – is limited. Ideas for getting feedback include DITA listening sessions (currently underway), listening sessions with vendors, quarterly summaries that outline where the work stands, and DITA TC webinars.
Kristen stated, “We don’t want to develop a new version of the standard in a vacuum!”
Starting points for participation include the OASIS Feedback site and their email archive.

Markus Wiedenmaier: “Set your content free! Work with any Format as DITA in FrameMaker 2017!”

Markus Wiedenmaier is the CEO of the company behind c-rex, a public/private cloud-based Digital Transformation Service and Information Delivery Service. One solution they provide is for Adobe FrameMaker: a plugin that works as a transformation service in the background. With this solution, you can simply open any Microsoft Word DOCX file in Adobe FrameMaker and it opens as a valid DITA XML document. You can then edit this document just like any other DITA document, save it as a DITA XML file, or save it back as a Microsoft Word DOCX file.
The common issues with unstructured content, according to Markus, are that:

XML is not the only format
Content reuse was created by developers, engineers and lawyers
TechDocs and other departments are growing together
Processes need to be integrated in their entirety
The format “jungle” is getting more and more confusing
Information 4.0 needs new solutions

Ideally, what we want to do to make this a better situation is set the content free. We’d love to be able to edit any format with the editing tool of the writer’s choice, publish from any format to any format required, integrate any kind of data services to enhance content automatically, reduce the complexity of systems, and integrate third-party documentation (OEM) with one click.
Markus showed us how c-rex works in a live demo. He opened a customer’s Word document in FrameMaker and adjusted it to create a structured document, entering and tagging content via DITA markup. He also showed how third-party content could be processed with c-rex tools, brought into FrameMaker, and generated in multiple output formats.
The smart editing prerequisites needed to make this “dream” happen included:

FrameMaker 2015/2017
MS Word 2010-2016
Word docs in .docx format
Configuration depends on your needs

There are several advantages to including this in an enterprise process. It eliminates the need for copying and pasting and big “legacy data migration projects,” so that you can just focus on your job. There are no expensive, time-consuming data conversion processes. You can integrate non-tech writers into your tech doc processes by letting the systems work for you in the background. You don’t have to care about formats anymore!
Markus outlined the infinite possibilities with Markdown integration and more, integrating data queries through either the query of databases by simply including an SQL-snippet or similar, or by query of any data service for automated content enrichment. You could also make use of rule-based text analytics to tag certain content, and integrate your CMS with several APIs and pluggable clients. It’s up to you what’s next!
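As a rough sketch of the data-query idea above (the table, product name, and template are all invented for illustration; c-rex’s actual SQL-snippet syntax is not shown in my notes), content enrichment via a database query might look like pulling live product data into a text template at publication time:

```python
import sqlite3

# Hypothetical product database standing in for whatever the SQL snippet
# would actually query in a real deployment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, max_load_kg REAL)")
conn.execute("INSERT INTO products VALUES ('Model X-200', 150.0)")

TEMPLATE = "The {name} supports a maximum load of {max_load} kg."

def enrich(template, product_name):
    """Fill a content template from the database instead of hard-coding facts."""
    row = conn.execute(
        "SELECT name, max_load_kg FROM products WHERE name = ?",
        (product_name,)).fetchone()
    return template.format(name=row[0], max_load=row[1])

print(enrich(TEMPLATE, "Model X-200"))
```

The appeal is that a spec change in the database flows into every publication automatically, with no writer editing numbers by hand.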

Magda Caloian: “Turn the right keys and content falls into place”

Magda Caloian, Business Consultant and DITA Specialist at German CCMS provider FCT AG, talked about case studies in which she showed us examples of her troubleshooting. To be honest, much of this was quite high-level DITA. This was an exercise in “deep diving”, as I call it, so it was a little difficult for me to follow along with the details. But this was what I could extrapolate from the talk.
Using PDF examples, Magda suggested that the architecture of a single-source project should include the following:

Metadata and filter criteria

The FrameMaker template components that are best used are:

Master pages
Reference pages

Publication pipelines which include:

Transformation stylesheet (XSLT)
DITA-OT plugin
ExtendScript, structured app, etc.

Other templates to consider include:

Topic templates – covers, grids, tech data sheets, troubleshooting
Keymap templates
Structure snippets – list types, table parts, hazardstatement types, keydefs for text, keydefs for graphics
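To illustrate the keydef idea from the list above (the key names, values, and the `[[…]]` placeholder syntax are my own invention, not FCT’s or DITA’s actual keyref syntax), a keymap lets topics stay neutral by referencing keys instead of literal product names or graphics:

```python
# Simplified sketch of DITA-style key resolution: a keymap defines neutral
# keys for text and graphics, and topics reference keys instead of literal
# values, so swapping one keymap retargets a whole publication.
KEYMAP = {
    "product-name": "AquaPump 3000",
    "logo":         "images/aquapump_logo.png",
}

def resolve_keyrefs(text, keymap):
    """Replace [[key]] placeholders with the values defined in the keymap."""
    for key, value in keymap.items():
        text = text.replace(f"[[{key}]]", value)
    return text

print(resolve_keyrefs("Install the [[product-name]] as shown.", KEYMAP))
```

This is what keeps topics “neutral and ready for reuse”: the same instruction set serves every product variant, and only the keymap changes per project.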

Lessons she learned in the process were that you should know what your goals and compromises are, know your team well, beware of the exceptions to the new rules, set the guidelines, and define neutral resources.
The advantage of keeping all these things in mind is that you can keep topics neutral and ready for reuse. All instruction sets can be consistent, with fast project updates. With system integration, the writers can focus on the writing.
(Magda, if I left out any key points, be sure to add your notes to the comments section below!)

Christian Weih: “Going global with your content: With DITA and the Across Language Server”

Christian Weih, Member of the Board at Across Systems, said that his talk was not going to be about DITA, but rather about making you content superheroes! Putting content into target languages is important. Creating consistent messages, especially in MY language (meaning, whatever your personal language is), is a plus!
Customers don’t buy what they don’t understand, and don’t get products they don’t understand. You need the right tools that move quickly and at cost. You need to use these tools and techniques in the translation process. It’s about picking the best processes and tools to create viable information.
Out-of-the-box integration tools make it easier to handle content through the translation process in the Across Language Server TMS (the product that Christian’s company makes).
You prepare information because you want to ensure that the quality of the translation matches the source. Not everyone involved in the translation process knows DITA. Every translation project needs to provide enough information at the end of the food chain to produce the best translation of the source and the same quality throughout the translation supply chain. You need to take an interest in the opposite side of things, especially if you have to look at security issues as well. Make sure everyone is on the same page.
Machine translations can have problems with context, as can some humans. Translating DITA by hand is expensive and slow, but the advantage of this approach is that you can get your translation faster. Translations usually involve trade-offs among time, money, and quality. Usually you can only achieve two of the three, and usually quality loses.
Clients usually distribute entire DITAMAPs, not just topics. Translation memory (TM) is used, so translators only have to work on the new content. Bigger content sets help the TM, which makes consistency easier and can make the process faster in that respect.
DITAMAPs can be created and saved. In TM systems, you select the DITAMAP to pull it into the system and define source and target languages; then a document settings template tells the TMS that DITA is coming in and defines which text is relevant for translation.
QA is built into the process to prevent errors, as are structure attribute settings. The TM is smart enough to look at context and structure measures, so past headers are translated the same as the current ones. It takes the fear out of translation projects! After the project is created, it becomes automated and part of the supply chain, ensuring that only the people assigned to the project receive the output. All participants get the same content in the same way, as needed, and securely.
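As a sketch of the “define what text is relevant for translation” step (this is my own simplified illustration of the concept, not Across’s actual document settings format), a filter might walk the DITA source and skip anything marked as non-translatable, such as part numbers:

```python
import xml.etree.ElementTree as ET

# Hypothetical DITA topic: the part number is wrapped in a <ph> marked
# translate="no" so it never reaches the translators.
TOPIC = """\
<topic id="t1">
  <title>Replacing the filter</title>
  <p>Order part <ph translate="no">FLT-0042</ph> before you begin.</p>
</topic>"""

def translatable_segments(topic_xml):
    """Collect element text, skipping elements flagged translate="no"."""
    root = ET.fromstring(topic_xml)
    segments = []
    for el in root.iter():
        if el.get("translate") == "no":
            continue
        if el.text and el.text.strip():
            segments.append(el.text.strip())
    return segments

print(translatable_segments(TOPIC))
```

Filtering like this up front is part of what keeps quality consistent: translators see only what they should touch, and protected content can’t be mangled.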
This makes DITA for the translation side as smooth as possible, and doesn’t endanger the files, allowing automation as much as possible without losing quality or context in the process.

Andrea L. Ames: “Structured Content … Sexy? Strategic? Or Both?”

The last presentation on Day 2 of Adobe DITA World 2017 was from Andrea L. Ames, CEO at Idyll Point Group. It was fun, as we looked at a lot of animal photos while talking about how much creating a content strategy for structured content is like a “dating” process.
Structured content is strategic and sexy because there are phases of first attraction, evaluation, and commitment that lead to the “afterglow” – no regrets.
Initially, it’s superficial – the first attraction. We look at whether things are fashionable and trendy by working to separate the message from the presentation, and see how reuse works across many looks. We also look at the financial stability of what we’re considering. Is it efficient? Is it cost-effective?
If our “suitor” passes this first phase, we move on to the “getting to know you” phase, otherwise known as evaluation. We look to see if there are many interests in common, such as whether they are omnichannel. We want them to be adaptive and flexible with our needs, and to be good communicators, such as supporting translation.
Once we’ve been initially attracted, and done our due diligence in our evaluation, we are ready to settle down. We are ready for a commitment. Within a commitment, we want something that will grow with us, or be scalable. We want to gain wisdom (from age and experience) that provides stability. We also want fidelity in what we’re being given, so that it’s reusable.
After we’ve made a commitment, we want, as Andrea calls it, to “achieve the afterglow.” In other words, we want to avoid regret! She assured us that if we follow both the sexy and the strategic in the buildup to doing things with content as outlined so far, we’ll have complete satisfaction. This is done through:

Consistency through models

External impact of models – define and deliver what’s appropriate for your customer (think about the creation of a tutorial)
Internal impact of models – ensure consistency in content creation

Metadata – being clever

The holy grail of content experience for those who have a structured content approach: if you leverage a great metadata model that matches up with your customer needs and the consistency models, it works!
External impact of metadata – enable dynamic, custom delivery
Internal impact of metadata – Enhance and improve findability of content components and thus reuse

Day 2 Conclusions
As mentioned before, the Chat Room during all the sessions was very active today. By the end of the day, good questions for the presenters were interspersed with silly humor from participants who were actively enjoying the conference.
I’m looking forward to seeing how Day 3 goes. See you on Day 3 of Adobe DITA World 2017!