Ryogo Toyoda: a tribute to the '80s and to 3D

This month, as we reflect on the future of 3D design, we decided to meet a cutting-edge illustrator and 3D designer, Ryogo Toyoda, who told us about one of his most recent compositions, '3D'.
On inspiration and process
"I always find inspiration in video games from the '80s and '90s," Ryogo told us. "In creating this composition, I wanted to use a colorful style and express the great selection of Adobe Stock assets."
Ryogo used 3D models and structures and brought together a wide variety of image styles, including vector icons he created in Illustrator and converted into 3D objects, as well as a mix of organic and non-organic shapes. Ryogo's favorite stock asset for this composition was a hand-made tropical-leaf pattern, because "it's a painterly style I can't create myself," he explained.
Designing in three dimensions
According to Ryogo, creating in 3D is very similar to creating in 2D. "As always, in this piece I focused on controlling the contrast between density and space, which applies in 2D design as well. I also focused on the balance between the vivid colors and the dark background," he explains. "However, when creating in 3D it's important to think about how things look in three dimensions and to understand how to bring out the character of the materials, the light, and the shadow."

Working with stock assets
When we asked him for advice for designers working with stock assets, Ryogo stressed the importance of planning and style. "You have to picture the final result during the initial phase of the project, so you can manage the process and the cost of the work," he says. "I also recommend using assets that you couldn't create in a short time, or whose style is far from your own sensibility."
For more, take a look at Ryogo's work, find out what's happening in 3D design, and follow 3D artists on Instagram and Behance. You can also visit Adobe Stock's marketplace of 3D assets and this month's gallery dedicated to 3D stock.

Take a Look at Some of Our MAX Partner Sessions

One of the challenges at MAX is building your session schedule — there’s so much to choose from! We wanted to highlight some of our partner sessions, and encourage you to sign up for one or more if you’ll be attending in person this year. Here are just a few we recommend, hosted by experts from companies you’re sure to recognize:

Transparent Teams: Driving Alignment Through an Open Creative Process – Presented by Dropbox
Modern creative teams are made up of a fluid workforce: freelancers, vendors, agencies, and cross-functional in-house teams collaborating across the globe. Dropbox believes that a transparent process is the key to keeping teams in sync, improving the flow of work, and bringing the best ideas to life. Whether you're a designer, marketer, or someone who manages creative teams, attend this session to learn how you can:

Use an open design process to launch new products and campaigns
Inspire your colleagues to unleash their creative energy, generate new ideas, and uncover better insights
Transition your team to a new way of working

How Far Can Design Stretch? Mixed Reality? AI? 2D/3D? – Presented by Albert Shum – CVP, Microsoft
We have a big canvas to stretch in digital design. Web, mobile, PC, tablet, collaboration displays, and mixed reality all vie for attention in an increasingly immersive world. We use touch, gesture, voice, inking, keyboard, mouse, dial, and gaze as inputs. How do we create engaging design for all of these experiences, at so many cross points? How do we keep people, rather than tech, at the heart of things? Join Albert Shum, Microsoft CVP of Design, for an inspiring discussion about bringing creativity to the future of design thinking, and building a system that will scale. In this session, we'll share:

Historical context on UX design and the evolution of emerging UX
A glimpse into Microsoft’s Fluent Design System
What you can do to scale your designs

Vimeo Staff Picks: Behind the Scenes – Presented by Ian Durkin, Sr. Curator, Video
Every day, tens of thousands of videos are uploaded to Vimeo. Only five are chosen as Vimeo Staff Picks. Founded in 2008, Vimeo Staff Picks has emerged as one of the preeminent channels for online video and one of the most coveted awards for young filmmakers, having helped launch the careers of many celebrated directors. Come for an in-depth look at the process behind curating a daily showcase of the best short narratives, documentaries, animations, and music videos on the Internet.
From Concept to Console: How Design Drives the World's Best-Selling Video Games – Presented in Partnership with Wrike, Sony, and One Pixel Brush
Game on! Compelling creative designs are critical to today's gaming experience. Follow the game journey from concept to studio to marketing and post-launch. Hear stories and lessons learned from the legendary creative minds behind the visual styles of Call of Duty: Infinite Warfare, Uncharted: The Lost Legacy, The Last of Us Part II, and others. Join us for a panel discussion with top leaders in the gaming industry. In this session, you will:

See how One Pixel Brush concept art studio uses compelling visual design to inspire development teams
Discover how Sony PlayStation’s Creative Studios turns visual concepts into gaming nirvana
Hear from PlayStation Marketing’s head of creative design about the design roles and processes involved in launching and marketing new releases
Learn how to leverage creative design to create customer buzz, loyalty, and community

Mapping Your Path to Great Design – Presented by Esri

Maps are everywhere; see how to use them for brand reinforcement and visual storytelling. As data visualization diagrams, maps have been around for thousands of years. Today, with the recent explosion of location-based information, clients and customers want maps for all forms of digital media and marketing. Join Esri, the world leader in analytics and mapping software, to:

Learn the basics of cartography — the art and science of making maps
Discover how to design with data-driven maps directly in Illustrator, Photoshop, and Adobe Muse
Explore location-based analysis and visualization techniques, including 3D and video

Creating Virtual Reality Video – Presented by Google

Virtual reality opens up new ways to create and experience immersive storytelling. Join VR creators Gary Hustwit, Jessica Edwards, Ben Ross, and Brittany Neff as they show work they've created with Google, Oculus, the Wall Street Journal, and others and discuss techniques for making compelling 360 video. If you've been wondering how to make VR content or have already started experimenting with this new medium, this session is for you. In this session, you'll learn:
Creative approaches: What types of stories work best in VR?
How to get started shooting 360 video, both monoscopic and stereoscopic
Differences between the standard video and 360 video editing workflow
How to capture and use audio in VR content
What VR tools are now part of Adobe Premiere Pro and After Effects

Unleashing the Power of Creative Cloud with Artist Android Jones

Join Android Jones along with reps from HP and NVIDIA on a creative storytelling experience. Android is an artist and digital painter, known for his many-layered, immersive designs and live performances. He participated in the Grateful Dead Fare Thee Well tour, and his work has been projected on iconic landmarks across the globe including the Sydney Opera House and the Empire State Building. In this session, Android will:
Demonstrate his creative workflow
Share his journey as an artist and answer any questions you’ve been dying to ask
Discuss how HP and NVIDIA technologies can help push Adobe Creative Cloud to new limits

Introducing the 2017 Adobe MAX European Insiders

We’re thrilled to share the talented and inspiring group of creatives from across Europe we’ve invited to be this year’s Adobe MAX Insiders at Adobe MAX, the world’s largest creative conference. Serving as the eyes and ears on the ground for the community members who can’t join us, the MAX Insiders will be sharing their experience at the general sessions, sneaks, parties and more on social media. They’ll even get behind-the-scenes at some exclusive events.
Get to know a bit about the MAX Insiders below, and be sure to follow their adventures at #AdobeMAX as they experience our creativity event of the year!


Insider names:
Bert Dries
Fabien Barral
Olivier Huard
Anna Heupel
Kristina Hader
Marvin Ströter
Dennis Schuster
Stefan Kunz
Cindy Vriend
Natalie Federmann Foss
Max Gecke
Daniel Saunders
Rich McCor
Alex Bec
Thomas Kakareko
Nathalie Geffroy
Johann Brangeon

Watch the MAX Keynote and more live online on October 18 to find out what's next for Adobe Creative Cloud. Sign up here.
And don’t forget you can also get Adobe MAX updates on Twitter via @AdobeMAX and @AdobeUK


They'll be experiencing Adobe MAX 2017 live: meet our European Insiders

We're pleased to introduce the talented group of European creatives we've invited to be Adobe MAX Insiders for this 2017 edition. Serving as relays for those who can't attend the event, the MAX Insiders will share their experience of the keynotes, sneaks, parties, and much more on social media. They'll also have the chance to slip behind the scenes at some exclusive events during Adobe MAX, the world's largest event dedicated to creativity.
Fabien Barral, Olivier Huard, Olivier Saint-Léger, and Nathalie Geffroy will be our French Insiders at Adobe MAX. They'll cover the event on their respective social networks and blogs, bringing you the sessions, inviting you behind the scenes, and sharing other stories straight from California! You'll also find Adobe MAX news on our blog.
To catch all of their Adobe MAX coverage, the hub set up by Olivier Huard will be your best companion.
Follow them here: – Fabien Barral aka Mr Cup: Facebook, Twitter, Blog, Instagram – Olivier Huard: Twitter, Blog – Olivier Saint-Léger: Twitter, Blog – Nathalie Geffroy: Facebook, Twitter, Instagram
They'll also be debriefing live on Facebook, and will answer all your questions on our Facebook page. Keep an eye on our social channels to take part!
Get to know all the MAX Insiders below, and don't forget to follow their adventures at #AdobeMAX during our most creative event of the year!


Insider names:
Bert Dries
Fabien Barral
Olivier Huard
Anna Heupel
Kristina Hader
Marvin Ströter
Dennis Schuster
Stefan Kunz
Cindy Vriend
Natalie Federmann Foss
Max Gecke
Daniel Saunders
Rich McCor
Alex Bec
Thomas Kakareko
Nathalie Geffroy
Johann Brangeon

On October 18 and 19, follow the Adobe MAX keynotes from home to discover our latest announcements and find inspiration from the biggest artists on the creative scene. Sign up here.
In the meantime, don't forget that you can also get Adobe MAX news via @AdobeMAX and #AdobeMAX on Twitter!

Adobe DITA World 2017 – Day 2 Summary by Danielle M. Villegas


Hello again, everyone! Your Adobe DITA World 2017 "resident blogger" Danielle M. Villegas is here again. I hope you liked yesterday's summaries. There was a lot going on yesterday, and there's still a lot today! After some technical difficulties on my end, and seemingly some fixed audio issues on the call, we got started with a quick round of "Where in the world?", asking participants to chime in via the Chat Pod in Adobe Connect with where they were connecting from. Participants hailed from Asia, Europe, and North and South America! Stefan and Dustin told us that registrations for DITA World have gone up, since registration for today and tomorrow is still open. DITA World now has over 2,500 registrants, which makes it the second-biggest technical communication conference in the world (only the tekom / tcworld conference is bigger)!
The day was filled with a lot of talk in the Chat Pod: not only questions from participants, with speakers answering them after their presentation time was up, but also just fun chatter along the way. (Tech comm people have a great sense of humor, for sure!)
Again, Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened the room and welcomed the audience. At 9 am Pacific Time we started with our Day 2 keynote speaker, Rahel Bailie!
I wrote as many notes as I could keep up with from the presentations; again, we were all hit with such a tsunami of information that it could be overwhelming at times! I've tried to hit the highlights below.

In this post:

[Keynote] Rahel Anne Bailie: “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content”
Noz Urbina: “AI Chatbots – Just another channel?”
Kristen James-Eberlein, “The State of DITA: Ten Years and More”
Markus Wiedenmaier: “Set your content free! Work with any Format as DITA in FrameMaker 2017!”
Magda Caloian: “Turn the right keys and content falls into place”
Christian Weih: “Going global with your content: With DITA and the Across Language Server”
Andrea L. Ames: “Structured Content … Sexy? Strategic? Or Both?”

Keynote from Rahel Anne Bailie: “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content”

Rahel Anne Bailie, Content Architect at Scroll LLP, kicked off the day with the keynote “Out of the Guest Room into the Master Suite: Committing to a Relationship with Structured Content.” She emphasized that structured content is no longer a luxury. It’s becoming a necessity!
She continued by giving an overview of what structure is. She felt that the best definition was that it's an arrangement of, and relations between, the parts or elements of something complex. She pointed out that normal HTML tagging doesn't specify enough for machines to read it, but by using DITA tagging, content becomes more machine-readable. She also pointed out that semantic content is intelligent content, and recommended that participants check out the book Intelligent Content, part of The Content Wrangler series, for more information.
Rahel discussed the benefits of Intelligent Content as being the following:

Discoverable – In practical terms, content is automatically discoverable, which means that search engines can find your content because they understand not only the text, but also your intended meaning.
Re-usable – Content can be reused across topics through content components combined in “mix and match” ways to build new topics or output to different formats.
Reconfigurable – Content can be repurposed in different contexts in a way that content can be automatically sorted or filtered, as well as included or excluded in different ways.
Adaptable – Content can adapt to specific contexts by being served up in different ways, depending on the device, audience, market, or other personalization variables.

The big question, though, is: are you using content to promote your brand? You should be, and it should be consistent across your brand. Content is the front door to your digital presence: it shapes how consumers understand your brand, provides critical aspects of acquisition, paves the way to customer retention, and helps with renewal.
In fact, the cost of acquiring a new customer is five times the cost of retaining an existing one! We should be working to keep existing customers happy. 89% of companies see customer experience as a key factor in driving customer loyalty and retention. Customers want support, not bling; they want to be able to use the product and to get support no matter what. They want it to work! The problem is, 80% of companies think they deliver a superior customer experience, but only 8% of their customers agree.
Why should we make a commitment to structured content? Structured content shouldn't be treated like a temporary fix. We need to articulate the "why" to the budget drivers and explain that it supports many of the major business drivers.
Those drivers include the following:

Brand loyalty – Getting people to buy a product or service and stay. Brand loyalty equals customer trust. Commonalities among successful brands include uniqueness, consistency, accuracy, relatability, and trustworthiness. Within those traits, structured content helps with accuracy, consistency, and relatability.
Market Expansion – Markets need to keep expanding next door or around the world, keeping in mind that there are many audience variants.
Risk management – Digital evidence shows what works and what doesn’t.
Customer Retention – 80% of future profits come from 20% of existing customers. One of the top reasons customers switch providers has to do with poor content communication.
Increase in capacity – This isn't a topic often discussed. Delivery is the easiest expense, and costs go up with each additional level of complexity.

So how can you automate content to add value? Technology, governance, people, and uses all play a part. When creating high-performance teams, high-performance tools need to be provided to them as well, such as CCMS, XML editors, or products like FrameMaker.
Structure makes content work better!

Noz Urbina: “AI Chatbots – Just another channel?”

This presentation definitely stirred up a lot of conversation and took some of yesterday’s discussions about chatbots to the next level.
Noz Urbina, CEO at Urbina Consulting, started his presentation asking us, “How are we going to fit this new channel in our ecosystem of already too many channels?” His answer was that his consulting firm’s mission states the best approach to this: “We help organizations have the kind of relationships with people that people have with each other.”
The trick is that we don't want this new chat thing to become too disjointed, because a bad initial experience will yield less adoption.
Noz provided a LOT of concepts, types, and terms for much of his talk, for which I'll present most of my notes, though many of his slides explained things better.
The term “conversational interfaces” is an umbrella term for chatbots, personal assistants, and voice-controlled user interfaces, as well as hybrids, which can combine modalities. They connect humans to machines (like Amazon Echo's Alexa and Microsoft's Cortana). When a user asks or types, “What's the temperature?”, the bot says, “The temperature right now in Valencia is 23,” and shows a chart with weather info. The intent (get weather) is within the grammar of known commands. The parameter slot is the city, and the slot type is the city entity.
Between the request and the response, the bot either recognizes a specific command from predefined grammar or uses natural language processing (NLP) to parse the input to determine the intent. A Grammar can have alternative phrasing, such as in the case of requesting the weather forecast, the “get weather” intent could accept both “what is the weather like” and “what’s the temperature” without NLP. When a bot is unable to answer within its current modality and is forced to make the user switch, then this is a “cliff”. If the bot is hitting too many cliffs, the protocol needs to hand over the request to a person.
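The grammar-versus-NLP flow described above can be sketched in a few lines. This is a minimal illustration, not any real framework's API; the intent name, patterns, and example city are invented:

```python
import re

# One intent with alternative phrasings; (?P<city>...) is the parameter slot.
# Intent names and patterns are invented examples.
GRAMMAR = {
    "get_weather": [
        r"what is the weather like in (?P<city>\w+)",
        r"what's the temperature in (?P<city>\w+)",
    ],
}

def match_intent(utterance):
    """Return (intent, slots) on a grammar match; None marks a 'cliff'."""
    text = utterance.strip().lower()
    for intent, patterns in GRAMMAR.items():
        for pattern in patterns:
            m = re.fullmatch(pattern, text)
            if m:
                return intent, m.groupdict()
    return None  # fall back to NLP, or ultimately hand over to a person

print(match_intent("What's the temperature in Valencia"))
# → ('get_weather', {'city': 'valencia'})
```

A real bot would try NLP before declaring a cliff; here `None` simply marks where that fallback would begin.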
When talking about AI (artificial intelligence), we need to talk about how the information is stored and presented. There is machine learning, in which NLP and algorithms allow an AI to be trained and to improve in a semi-guided way. One technique is the decision learning tree, in which all interactions are scripted and must fall into a certain sequence. The logical equivalent of this is a touch-tone phone interface, but chatbots are driven by words instead of button tones on the phone. The system doesn't learn the information; it's just recoded.
The takeaways from the terminology are that chatbot responses are based on their ability to recognize specific, supported intents. Using NLP allows you to go beyond a limited grammar and accept a wider range of questions. Decision trees are a simple way to get started. Every task in your user task analysis could be an intent within your grammar.
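A decision-tree bot, by contrast, is fully scripted: each reply selects the next node, exactly like a touch-tone menu driven by words. A minimal sketch, with invented menu content:

```python
# Every interaction is scripted; nothing is learned. Node names and
# prompts are invented examples.
TREE = {
    "start":   {"prompt": "Do you need 'billing' or 'support'?",
                "billing": "billing", "support": "support"},
    "billing": {"prompt": "Is this about an 'invoice' or a 'refund'?"},
    "support": {"prompt": "Which product do you need help with?"},
}

def step(node, user_reply):
    """Advance one turn; an unrecognized reply keeps the user at the same node."""
    branches = {k: v for k, v in TREE[node].items() if k != "prompt"}
    nxt = branches.get(user_reply.strip().lower())
    if nxt is None:
        return node, "Sorry, I didn't catch that. " + TREE[node]["prompt"]
    return nxt, TREE[nxt]["prompt"]

print(step("start", "billing"))
# → ('billing', "Is this about an 'invoice' or a 'refund'?")
```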
To make the connection between chatbots and the content, typed chunks are ideal!
In one example he showed, the DITA elements <concept> and <shortdesc> are used, with content and intent broken down within these. It's a case in which the labelling is essential. Markup can apply to multiple channels, and if tagged properly, can work anywhere. Noz tried it on his Google chatbots with an example of something he wrote, and it worked! Structured content can be simplified into content that chatbots can understand better. More markup created better, more responsive answers. However, just because you are doing DITA, it doesn't mean you are chatbot-ready!
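To make the idea of typed chunks concrete, here is a sketch of a bot pulling the <shortdesc> out of a DITA concept topic with Python's standard library; the topic content below is an invented example, not from Noz's demo:

```python
import xml.etree.ElementTree as ET

# An invented DITA concept topic; the <shortdesc> is a chunk
# already sized for a chat response.
TOPIC = """<concept id="reset-password">
  <title>Resetting your password</title>
  <shortdesc>Use the account page to request a password reset link.</shortdesc>
  <conbody><p>Longer step-by-step instructions would live here.</p></conbody>
</concept>"""

def chatbot_answer(topic_xml):
    """Return the short description as the bot's answer text."""
    root = ET.fromstring(topic_xml)
    return root.findtext("shortdesc").strip()

print(chatbot_answer(TOPIC))
# → Use the account page to request a password reset link.
```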
Pure structural markup is just the shape of the container for the content. The information type makes up the containers that describe universal base types of information. It can be highly semantic, which can translate into explicit meaning in the markup. The trick is to use the right mix of markup!
Chatbots will increase the need for specialization, or at least judicious use of attributes added to our markup. You can be DITA-specific, but that helps only if you're already making careful use of short descriptions and even abstracts. The good news is that if you are using DITA, you're probably much closer than you were before DITA. The bad news is that specializing is expensive and has risks.
The architecture and implementation of this depend mostly on the platform, and can consist of many moving parts.
Technical communicators can prepare themselves for this kind of content by doing the following tasks:

Do a top task analysis. Build a comprehensive list of user tasks, and choose which you’ll support with bots.

Follow with customer journey mapping.
Map complex, multi-touchpoint experience to content, align across silos around customer experience, and derive actionable insights and content standard improvements.
Do a stage-by-stage analysis of what questions users may ask when achieving their objectives.

Build or refine taxonomies

The more you can detect about user context (the sum total of all the user's situational parameters), the faster you'll be able to target the right answers.
Include domain modelling.
Be sure to include knowledge graphing. Knowledge graphs support cognitive reasoning systems making it easier for bots to find the right content. (Google for OWL2 or SKOS standards for more information.)
Improve your structures (the markup spoken of earlier).
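The broader/narrower relations at the heart of a SKOS-style taxonomy can be sketched as a simple lookup: walking upward lets a bot fall back to broader content when no specific answer exists. The concepts below are invented examples:

```python
# Each concept points to its broader concept, SKOS-style.
# The taxonomy content is an invented example.
BROADER = {
    "inkjet-printer": "printer",
    "laser-printer": "printer",
    "printer": "hardware",
}

def broader_chain(concept):
    """Walk the taxonomy upward from a concept to its root."""
    chain = [concept]
    while concept in BROADER:
        concept = BROADER[concept]
        chain.append(concept)
    return chain

print(broader_chain("inkjet-printer"))
# → ['inkjet-printer', 'printer', 'hardware']
```

A real knowledge graph (OWL2, SKOS) adds typed relations beyond broader/narrower, but the fallback principle is the same.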

General implementation tips:

A textual chatbot should consider voice restrictions, and then be expanded as needed.

All reusable content style guidelines apply. This includes:

Voice, style and terminology. Consistency is vital!
No positional language (“see below,” etc.)
Avoid soft dependencies (“As read in chapter 2,” etc.)

All usual innovation rules apply. Set realistic expectations, then meet them. Test first internally, then in a friendly, small group of external users. If all goes well, then test widely.

So, you can see, chatbots should be just another channel! The problem is most organizations won't have content properly written for them. DITA is much closer for writing the content, but most won't have used it to its full potential. Basic decision trees with every task as an intent aren't scalable, but we need to be able to publish knowledge maps to chatbot systems, so it's a good place to start.
Noz answered a LOT of questions in the chat room after his talk as the next presenter was setting up, and the chat was pretty busy during his presentation, so this is a very hot topic!

Kristen James-Eberlein: “The State of DITA: Ten Years and More”

Kristen James-Eberlein, Chair of the OASIS DITA Technical Committee, is a consultant in the field. She has been involved since 2008, participating by co-editing DITA 1.2 and 1.3.
Who develops DITA? OASIS – the Organization for the Advancement of Structured Information Standards – does! It's a non-profit organization that drives the development, convergence, and adoption of open standards for the global information society.
Things to know about DITA and OASIS:

DITA is an open standard. Open standards enable interoperability; with DITA, that means content can work across different applications and companies.
There are eight consultants who represent OASIS to develop the standards.
DITA started in the 1990s at IBM. In March 2001, DITA was made available by IBM for public use on DeveloperWorks.
The OASIS committee formed in the spring of 2004; IBM donated the DTDs and docs, but DITA-OT went elsewhere (See our talk with Robert from Day 1.)

Kristen went through the history of the DITA releases, so we could see when certain features came out.
DITA 1.3 broke with tradition, as it was released in three packages: a base edition, a technical content edition, and an all-inclusive edition for learning and training purposes. These releases were a result of the growth of DITA and the emergence of new audiences. They focused on topic and map as the CORE doc types. The desire was to provide users with targeted packages that better contain what they need – no more, no less. As of October 2016, DITA 1.3 Errata 01 was released, with Errata 02 scheduled for later this year.
Kristen feels that the future of DITA lies in both the offering of Lightweight DITA and the planning for DITA 2.0.
Why would we want to use Lightweight DITA? We'd want to use it for the ease of DITA adoption and implementation. Full DITA is much more than many need, especially in new markets such as marketing and medical information. We need to be able to map to content in other forms, such as HTML5, Markdown, or whatever arrives in the future. Adoption of this lightweight version will foster growth of low-cost tools and applications.
Design will be essential, as it will include fewer info types, namely topics and maps only. It will include a smaller element set, a stricter content model, a subset of reuse mechanisms, and a new multimedia domain. Between DITA 1.3 and Lightweight DITA, 1.3 offers more docs types and elements by far.
Lightweight DITA was created to provide a constrained structure with common constraints that can be used interchangeably. Authoring against a common set of constraints enables multi-format authoring instead of customization for only one company. Content should move from one place to another with ease this way, thus promoting interoperability.
Within the subset of reuse mechanisms, there will be one filtering attribute, available on block elements. Conref will be available on block elements, while keyref will be available on phrase-level elements.
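The idea behind key-based reuse can be sketched outside DITA syntax: a map-level key definition resolves phrase-level references at publish time. The placeholder syntax and keys below are invented for illustration, not actual DITA markup:

```python
import re

# A map-level key space; in real DITA these would be <keydef> elements.
# Key names and values are invented examples.
KEYDEFS = {"product-name": "FrameMaker 2017", "company": "Adobe"}

def resolve_keyrefs(text, keydefs):
    """Replace [keyref:NAME] placeholders with their key definitions."""
    return re.sub(r"\[keyref:([\w-]+)\]",
                  lambda m: keydefs[m.group(1)], text)

print(resolve_keyrefs("Open [keyref:product-name] by [keyref:company].", KEYDEFS))
# → Open FrameMaker 2017 by Adobe.
```

Swapping the map (the key space) retargets every reference at once, which is what makes phrase-level reuse maintainable.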
Multimedia elements are included by design to be compatible with HTML5, and for audio and video use. This functionality will be released as a 1.3 add-on domain later this year or early 2018. The first public review will be opening this month! There will be a chance to look at explanatory docs, DTDs, and sample files – OASIS wants the feedback!
As for DITA 2.0 planning, it's an opportunity for architectural changes. Kristen said that for the 1.x releases, "Our hands have been tied." For the 2.0 version, there will be more freedom. It won't be backward compatible, but they will be working to avoid being disruptive, plan migration strategies, and not break your stuff in the process of removing deprecated areas and fixing problems.

Possible DITA 2.0 items that are on the drawing board include:

New map type
Mandate a processing order
Redesign of chunking
Remove deprecated elements
Redesign packaging of grammar files
Clearer and tighter conformance statement
Deprecate or remove @copy-to
Separate base and tech content
Anything to make specialization easier and more documented
Enable specialized attributes selectively rather than globally

OASIS needs better input from the DITA community. They are figuring out ways for the DITA TC and the larger community to interact better. The formal OASIS mechanism currently in place – the dita-comment list – is limited. Ideas for getting feedback include DITA listening sessions (currently underway), listening sessions with vendors, quarterly summaries that outline where the work is at, and DITA TC webinars.
Kristen stated, "We don't want to develop a new version of the standard in a vacuum!"
Starting points for participation include the OASIS Feedback site and their email archive.

Markus Wiedenmaier: “Set your content free! Work with any Format as DITA in FrameMaker 2017!”

Markus Wiedenmaier is the CEO of c-rex.net, a public/private cloud-based digital transformation and information delivery service. One solution they provide is for Adobe FrameMaker: a plugin that works as a transformation service in the background. With this solution, you can simply open any Microsoft Word DOCX file in Adobe FrameMaker and it opens as a valid DITA XML document. You can then edit this document just like any other DITA document, save it as a DITA XML file, or save it back as a Microsoft Word DOCX file.
The common issues with unstructured content, according to Markus, are that:

XML is not the only format
Content reuse was created by developers, engineers and lawyers
TechDocs and other departments are growing together
Processes need to be integrated in their entirety
The format “jungle” is getting more and more confusing
Information 4.0 needs new solutions

Ideally, what we want to do to make this a better situation is set the content free. We’d love to be able to edit any format with the editing tool of the writer’s choice, publish from any format to any format required, integrate any kind of data services to enhance content automatically, reduce the complexity of systems, and integrate third-party documentation (OEM) with one click.
Markus showed us how c-rex works in a live demo. He opened a customer's Word document in FrameMaker and adjusted it into a structured document, entering and tagging content via DITA markup. He also showed how the same could be done with third-party content using c-rex tools, bringing it into FrameMaker and generating output in multiple formats.
The smart editing prerequisites needed to make this “dream” happen included:

FrameMaker 2015/2017
MS Word 2010-2016
Word docs in .docx format
Configuration depends on your needs
c-rex.net account

There are several advantages to including c-rex.net in enterprise processes. It eliminates the need for copying and pasting and for big "legacy data migration projects," so you can just focus on your job. There are no expensive, time-consuming data conversion processes. You can integrate non-tech writers into your tech doc processes by letting the systems work for you in the background. You don't have to care about formats anymore!
Markus outlined further possibilities with Markdown integration and more: integrating data queries, either by querying databases through a simple embedded SQL snippet or similar, or by querying any data service for automated content enrichment. You could also use rule-based text analytics to tag certain content, and integrate your CMS through several APIs and pluggable clients. It's up to you what's next!
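The database-query idea can be sketched with an in-memory SQLite table: a small SQL snippet pulls a value into the published text at build time. The table, column, and product names below are invented examples, not c-rex's actual interface:

```python
import sqlite3

# A stand-in for a real product database; schema and values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE specs (product TEXT, max_temp INTEGER)")
conn.execute("INSERT INTO specs VALUES ('Pump X', 85)")

# The "SQL snippet" embedded in the content pipeline:
row = conn.execute(
    "SELECT max_temp FROM specs WHERE product = ?", ("Pump X",)).fetchone()
print(f"Operate Pump X below {row[0]} °C.")
# → Operate Pump X below 85 °C.
```

Because the value comes from the database at publish time, the published text stays current without hand edits.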

Magda Caloian, “Turn the right keys and content falls into place”

Magda Caloian, Business Consultant and DITA Specialist at German CCMS provider FCT AG, walked us through case studies showing how she does troubleshooting on single-source projects. To be honest, much of this was quite high-level DITA. It was an exercise in “deep diving,” as I call it, so it was a little difficult for me to follow all the details, but this is what I could extrapolate from the talk.
Using PDF examples, Magda suggested that the architecture of a single-source project should include the following:

Metadata and filter criteria

The FrameMaker template components that are best used are:

Master pages
Reference pages

Publication pipelines which include:

Transformation stylesheet (XSLT)
DITA-OT plugin
ExtendScript, structured app, etc.

Other templates to consider include:

Topic templates – covers, grids, tech data sheets, troubleshooting
Keymap templates
Structure snippets – list types, table parts, hazardstatement types, keydefs for text, keydefs for graphics
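
To make “structure snippets” concrete, here is a minimal sketch of keydefs for text and for graphics collected in a map. The element names follow the DITA standard, but the key names, product name, and file path are invented for illustration:

```xml
<map>
  <title>Reusable key definitions</title>
  <!-- Text keydef: <keyword keyref="product-name"/> in a topic resolves to this keyword -->
  <keydef keys="product-name">
    <topicmeta>
      <keywords><keyword>ExampleProduct 2000</keyword></keywords>
    </topicmeta>
  </keydef>
  <!-- Graphic keydef: <image keyref="warning-icon"/> in a topic resolves to this file -->
  <keydef keys="warning-icon" href="images/warning.png" format="png"/>
</map>
```

Topics then reference the keys rather than hard-coding the text or path, which is what keeps them neutral and ready for reuse.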

Lessons she learned in the process were that you should know what your goals and compromises are, know your team well, beware of the exceptions to the new rules, set the guidelines, and define neutral resources.
The advantage of keeping all these things in mind is that topics stay neutral and ready for reuse. All instruction sets can be consistent, with fast project updates. And with system integration, the writers get to do the writing.
(Magda, if I left out any key points, be sure to add your notes to the comments section below!)

Christian Weih: “Going global with your content: With DITA and the Across Language Server”

Christian Weih, Member of the Board at Across Systems, said that his talk was not going to be about DITA, but rather about making you content superheroes! Getting content into target languages is important, and creating consistent messages, especially in MY language (meaning whatever your personal language is), is a plus!
Customers don’t buy what they don’t understand, and don’t get products they don’t understand. You need the right tools to move quickly and at reasonable cost, and you need to use those tools and techniques in the translation process. It’s about picking the best processes and tools to create viable information.
With out-of-the-box integration, content is easier to process through the translation workflow of the Across Language Server TMS (the product that Christian’s company makes).
You prepare information because you want the quality of the translation to match that of the source. Not everyone involved in the translation process knows DITA, so every translation project has to provide enough information at the end of the food chain to get the best translation from the source and keep the same quality throughout the translation supply chain. You need to take an interest in the other side of the process, especially if you have to look at security issues as well. Make sure everyone is on the same page.
Machine translations can have problems with context – as can some humans. Translating DITA by hand is expensive and slow, but the advantage of a tool-supported approach is that you get your translation faster. Translation projects juggle time, money, and quality; usually you can achieve only two of the three, and usually quality loses.
Clients usually hand over entire DITAMAPs, not just topics. Translation memory (TM) is used, so translators only have to work on the new material. Bigger bodies of content feed the TM, which makes consistency easier and can make the work faster.
DITAMAPs can be created and saved. In the TM system, you select the DITAMAP to pull it into the system and define the source and target languages; a document settings template then tells the TMS that DITA is coming in and defines which text is relevant for translation.
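
As a minimal illustration of the kind of DITAMAP a client might hand off (file names and titles are invented; the doctype and the xml:lang attribute are standard DITA, and xml:lang is one cue a TMS can use for the source language):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map xml:lang="en-US">
  <title>Pump X100 Operator Manual</title>
  <!-- The map pulls in whole topics, so the TMS can diff each one against the TM -->
  <topicref href="overview.dita"/>
  <topicref href="installing.dita">
    <topicref href="safety-notes.dita"/>
  </topicref>
</map>
```

Because the whole map is handed over, the TMS can compare every topic against translation memory and route only the new or changed segments to translators.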
QA is built into the process to prevent errors, as are structure attribute settings. The TM is smart enough to apply context and structure measures, so past headers are translated the same way as the current ones. It takes the fear out of translation projects! After the project is created, it becomes automated and part of the supply chain, ensuring that only the people assigned to the project can receive the output. All participants get the same content, in the same way, as needed, and securely.
This makes DITA for the translation side as smooth as possible, and doesn’t endanger the files, allowing automation as much as possible without losing quality or context in the process.

Andrea L. Ames: “Structured Content … Sexy? Strategic? Or Both?”

The last presentation on Day 2 of Adobe DITA World 2017 was from Andrea L. Ames, CEO at Idyll Point Group. It was fun: we looked at a lot of animal photos while talking about how much creating a content strategy for structured content resembles a “dating” process.
Structured content is strategic and sexy because it goes through phases – first attraction, evaluation, and commitment – leading to the “afterglow”: no regrets.
Initially, it’s superficial: the first attraction. We check whether things are fashionable and trendy by separating message from presentation and seeing how reuse supports many “looks.” We also check financial stability: Is it efficient? Is it cost effective?
If our “suitor” passes this first phase, we move on to the “getting to know you” phase, otherwise known as evaluation. We look for interests in common, such as whether they are omnichannel. We want them to be adaptive and flexible to our needs, and good communicators – for example, supporting translation.
Once we’ve been initially attracted, and done our due diligence in our evaluation, we are ready to settle down. We are ready for a commitment. Within a commitment, we want something that will grow with us, or be scalable. We want wisdom (from age and experience) that provides stability. We also want fidelity in what we’re being given, so that it’s reusable.
After we’ve made a commitment, we want, as Andrea calls it, to “achieve the afterglow.” In other words, we want to avoid regret! She assured us that if we follow both the sexy and the strategic in the buildup, as outlined so far, we’ll have complete satisfaction. This is done through:

Consistency through models

External impact of models – define and deliver what’s appropriate for your customer (think about the creation of a tutorial)
Internal impact of models – ensure consistency in content creation

Metadata – being clever

A great metadata model is the holy grail of content experience for those with a structured content approach: if you leverage one that matches up with your customer needs and your consistency models, it works!
External impact of metadata – enable dynamic, custom delivery
Internal impact of metadata – Enhance and improve findability of content components and thus reuse

Day 2 Conclusions
As mentioned before, the Chat Room during all the sessions was very active today. By the end of the day, good questions for the presenters were interspersed with silly humor from participants who were actively enjoying the conference.
I’m looking forward to seeing how Day 3 goes. See you on Day 3 of Adobe DITA World 2017!

#AdobeMAX kicks off next week, with @nobi, @fladdict, and @kazuch0924 attending! Introducing the 2017 MAX Insiders #AdobeMAXJP

Adobe MAX, the now-annual festival of creativity, returns: from October 16 to 20, 12,000 creators from 62 countries will gather in Las Vegas for the latest news from Adobe!
The program is packed: keynotes by Adobe executives and product evangelists; inspirational talks by top creators such as Jon Favreau, director and actor of the “Iron Man” and “Avengers” series, and Mark Ronson, the music producer known for the megahit “Uptown Funk”; “Sneaks,” Adobe’s sneak peeks at its most advanced technology; and more than 300 hands-on sessions led by experts in the creative tools.
Adobe has invited eight social media influencers and young artists from Japan to attend Adobe MAX as “MAX Insiders.” They will deliver breaking Adobe MAX news through blogs, YouTube, Twitter, and other media. Here, in no particular order, are the participating MAX Insiders.

@nobi is social influencer Nobuyuki Hayashi. His favorite Adobe product is Adobe Stock (the service he says he benefits from the most); his favorite app is the audio editor Audition. What he is most looking forward to at Adobe MAX is the creator keynote. “For about two years now, the production platforms of professional creators have been steadily shifting to mobile. I can’t wait to see how much further #MakeItOnMobile advances this year!” he says. Watch for @nobi’s live tweets from the Las Vegas venue.

A giant of the UI/UX design world, @fladdict, a.k.a. Takayuki Fukatsu, whose influence is so enormous that servers reportedly go down when he retweets something, is attending Adobe MAX for the first time. His favorite products are Photoshop, Illustrator, and XD, and the sessions he is most looking forward to are those on AI (artificial intelligence). “I have high expectations for Adobe as an AI company. I’m watching to see whether they announce new image-recognition AI approached from an angle completely different from the other players!” he says. What does Fukatsu see in Adobe’s present and future?
Kazu Channel, the popular YouTuber with a Fukui accent and a million followers, is attending his first Adobe MAX. He edits video with Adobe Premiere Pro every day to produce his shows; what will he make of Adobe’s most advanced technology? His favorite product is Photoshop. Since this is his first time, he is looking forward to every event: “I’m going to bring the fun atmosphere of the venue straight to my viewers!”
A San Francisco-based software engineer, attending Adobe MAX for the third year in a row! He runs the YouTube channel “DRIKIN VLOG,” documenting daily life (and gadget splurges) on video, and the popular tech-industry podcast “backspace.fm.” His favorite products are Premiere Pro and Audition, and the sessions he’s watching are on the evolution of Premiere Pro and other video editing tools. “Attending together with Kazu Channel, my great senior among YouTubers, has me really fired up. I’ll do my best to report in a way that makes listeners think, next time I want to attend Adobe MAX too!”

UI/UX designer Kobakana is attending not just her first Adobe MAX but her first overseas event. She is the creator behind #kobaka7_sketch, a popular Twitter series of illustrations explaining UI/UX concepts. Her favorite product is Photoshop, which she has used since high school. “I’ll do my best to get Japanese designers interested.” Expect Kobakana to explain Adobe MAX clearly through her illustrations, too!

Rintaro Shimohama, the creator behind projects such as the “Noramoji Discovery Project” and “INDUSTRIAL JP,” is attending for the first time. His favorite product is Illustrator, and he also appeared in #Illustrator30_30, the blog series marking Illustrator’s 30th anniversary. “To be honest, I don’t really have a grasp of what kind of company Adobe actually is, so I want to experience that for myself,” he says. “It’s no exaggeration to say that Adobe’s creative tools are what keep me going today, so I’ll attend with constant gratitude.”
Media artist Akinori Goto, known for his “toki-” series, is attending Adobe MAX for the first time. The experience of not being understood while living in the United States led him to create art that communicates without words; what kind of experience will he have at Adobe MAX? On his favorite product: “I can’t say I’m especially good with it, but for some reason I love After Effects.” “It’s the world’s biggest creative festival, so I’m looking forward to the amazing people I’ll meet,” he says. He also appeared in #Illustrator30_30.

Interaction designer Takuma Nakata also appeared in #Illustrator30_30. “What I benefit from most right now is that Creative Cloud lets me use past versions as well,” he says. At the meetups with creators from all over the world, he hopes to meet many people and find collaborators for future projects. “I want to put the language skills I gained living abroad to use and enjoy connecting with as many people as possible!”
Follow their social accounts and enjoy the MAX Insiders’ live reporting from Adobe MAX, each from their own unique perspective. Don’t forget to follow the official Adobe Twitter accounts @creativecloudjp and @adobemax as well! The official hashtags are #AdobeMAX and #AdobeMAXJP (Japanese). The MAX Insiders invited from around the world are also introduced here.

Airbus Helps Airline Clients Boost Bookings

Airbus is a worldwide leader in the aerospace sector, leading markets as a manufacturer of commercial aircraft, helicopters, and space and defense craft. With fierce competition in the transportation industry from both incumbents and newcomers, Airbus knew that it had to look for new ways to expand its global reach and communicate with its target audiences efficiently over any device.
As a massive, global company, Airbus’ digital efforts were previously siloed and fragmented. The company decided to undertake a digital transformation and deploy integrated Adobe Experience Cloud solutions.
With Adobe Experience Manager Managed Services, part of Adobe Marketing Cloud, Airbus brings together marketing assets and communication efforts from around the world under one umbrella. By creating a central communications hub powered by Adobe Experience Manager, Airbus is encouraging collaboration, enhancing branding consistency, and improving efficiency by allowing marketers and communications teams to share content globally across all digital channels.
Adobe Analytics, part of Adobe Analytics Cloud, provides measurements and data that enable Airbus to understand audiences and find success. By understanding what content engages which audiences, Airbus communication teams target more relevant messaging.
Airbus took a major leap into its new digital age with its first website aimed directly at passengers, not just commercial airlines. The Airbus A380 is a new standard in airline luxury. This beautifully designed two-deck aircraft, the largest commercial aircraft today, is filled with technologies that dampen noise, refresh the air, and help passengers fly in comfort.
Airbus encourages passengers to share their A380 experiences, become excited about A380 travel, and even find A380 flights where they can experience the plane for themselves. While the website is aimed at passengers, it helps drive traffic and bookings to commercial airlines, which in turn helps to increase profits and loyalty from Airbus customers.
“All that insight allows us to understand the final clients of our clients a little better,” says Jeremiah Bousquet, Digital Transformation Leader at Airbus.

In the Era of the Fourth Industrial Revolution, Customer Experience Is Survival

IKEA stores where customers can enjoy shopping while trying assembly for themselves; huge shopping malls that offer entertainment to rival a theme park: both are telling examples of business marketing that show how important it is to let customers experience things firsthand. According to the 2017 Digital Trends report from research firm Econsultancy, 20% of the 14,000 marketing and digital professionals surveyed worldwide named “optimizing the customer experience” as this year’s single most exciting opportunity.

The arrival of the Fourth Industrial Revolution is expected to bring sweeping change not only to industry but to life in general. Since it was taken up as a major agenda item at last year’s Davos Forum, the Fourth Industrial Revolution has moved to the center of discourse across industry and government policy. In this climate, what kind of experience a company delivers to its customers, and how, is set to become a key factor in determining its competitiveness.
An outstanding customer experience does more than win customers’ goodwill; it can be an engine of real business growth. Forrester Research reported that customer-experience leaders recorded average revenue growth of 17%, while laggards managed only 3%. Customers of companies with high experience-satisfaction scores were not only more likely to keep using those companies’ services, but also more likely to recommend them to friends and acquaintances.
The keys to the customer experience demanded of companies in the Fourth Industrial Revolution era are personalization and consistency: the experience must be thoroughly tailored to the customer, and it must be sustained. Use of mobile devices such as smartphones and tablets has grown to the point that we speak not just of “mobile first” but of “mobile only.” With each person using multiple devices, the touchpoints where a company meets its customers keep multiplying. Delivering a consistent, personal experience at every one of those touchpoints has therefore become an essential element of the experience business.
What enriches this kind of differentiated customer experience are the technologies underpinning the Fourth Industrial Revolution: cloud, artificial intelligence, and the Internet of Things. Using IoT or beacon technology, a company can send a customer the promotional coupon they need when they visit a store. AI assistants, much in the spotlight lately, are also highly effective. At a hotel chain, an AI assistant can recognize a guest’s voice from accumulated voice data; when that guest checks in, it can check their mileage and suggest loyalty programs such as live shows, or encourage them to use points when booking a room.
London’s Heathrow Airport is another good example of combining customer experience with Fourth Industrial Revolution technology. Heathrow, which handles 75 million passengers a year, combines travelers’ itinerary information, in-airport Wi-Fi usage, and purchase data from shops to understand each traveler’s preferences: coupons for shoppers, notices for parking customers, amenity guides for transfer passengers, and so on, delivered on the web and on mobile. Thanks to this, Heathrow, once a byword for “Europe’s most chaotic airport,” now appears every year among Skytrax’s top 10 airports in the world.
The Fourth Industrial Revolution is already underway worldwide, yet Korean companies remain underprepared: a recent Hyundai Research Institute survey found that more than seven out of ten Korean companies have made no preparations for it. The Fourth Industrial Revolution amounts to the arrival of the “experience business” era, in which experience determines the difference between brands. In the end, only companies that can deliver an outstanding customer experience will survive, and that must be built on digital innovation through advanced technologies such as AI, cloud, and IoT. In the era of the Fourth Industrial Revolution, adopting advanced technology is not a choice but a matter of survival.

Electronic Documents Aid in Disaster Recovery

I often talk about digital transformation as a way to help businesses preserve documents and streamline processes, but the customer experience is the driving force behind the transformation. And the customer experience goes beyond daily transactions – it extends to their personal life and the ability for the business to make the customer’s life easier in any way they can.
Each day, thousands of people are affected by some form of disaster. This time, disaster hit home for me and my family. Hurricane Harvey slammed Houston hard and it deeply affected my family and friends. From our family restaurant being in the line of destruction, to seeing friends and family who needed assistance from local aid organizations, it was an eye-opening experience. I realized just how big of a role digital transformation can have in improving lives and experiences.
Even in the wake of natural disasters, it’s important that people can return to some sense of normalcy quickly. Electronic documents, forms, and processes can enable organizations to help people do just that. And not only help people, but help people help people.
How Organizations Can Use Electronic Documents to Aid Efforts
Disaster relief is no small effort. It takes support from aid organizations on the national and local scale and requires thousands of man hours from volunteers as well as astounding amounts of tangible goods and cash. While many organizations are prepared to handle disaster recovery on a large scale, it’s still important that they continually streamline and adjust their processes to increase their support capacity and cut down on response time. Mobile technology, for example, is crucial to those efforts, and my colleague Dan Puterbaugh recently wrote about the important role it plays in a catastrophe.
In addition to mobile preparedness, electronic forms and documents are another small way that national organizations are simplifying their processes so that they can serve more people faster. The Red Cross is an example of a relief organization using electronic forms and documents to efficiently facilitate its recovery efforts. Among the digital elements it has enabled to better serve communities are electronic volunteer management tools. Electronic volunteer forms enable the organization to bring in volunteers quickly and without much overhead, so that it can serve populations affected by disaster without the impediments of standard paperwork.
But it’s not just large organizations that benefit from electronic documents and digital processes. During the Hurricane Harvey relief efforts, one of the local organizations on the ground, the Texas Lighthouse Charity Team, was able to pursue their mission uninterrupted because of digital processes they put in place ahead of time. From taking online donations to establishing a digital network of vendors and volunteers, the team provided life-saving relief, including emergency rescues and food deliveries, to those in need and extended their reach by enabling others to use their network to serve in other ways.
Even organizations that are not normally associated with disaster recovery can help ease the process for those affected. Banks that utilize electronic forms and documents can conduct better outreach to their customers in affected areas and ease the burden of making payments or conducting business while they are recovering. Electronic forms can help people apply for payment deferral, apply for financial support, and many other tasks that help ease the burden of recovery.
Document Preservation – A Small, but Critical Part of Recovery
Business owners are in a uniquely vulnerable position when disaster strikes: on one hand they are concerned with their personal welfare, but on the other they have to focus on continuing or preserving their business operations. Documents play a key role in daily operations, and by using electronic document services, businesses enable operations to continue with minimal delay. They can also protect their most important documents from natural elements outside their control, such as flood waters from a hurricane.
However, as I pointed out previously, automating forms and documents isn’t just for large organizations. Small businesses and organizations can leverage their power as well. For example, by creating digital forms for donations and volunteer sign up, local non-profits can make it easier for people wanting to help to give both financially and with their time.
Small businesses are often the most affected by disasters and digital transformation can be a large component of getting their business back up and running. Tasks such as payroll are often overlooked in digital transformation, but if you’re in the middle of disaster recovery and are unable to get out paper checks, your business and your employees will suffer when they need help the most. Automating your business processes, such as payroll, can keep everyone on track when times are hardest.
Electronic Documents Serve Organizations Large and Small
Many organizations are already using these technologies to better serve their clients and benefactors – some on a massive scale. Though electronic documents are a small piece of the puzzle, they can be an important step toward bringing normalcy back to those in need.
The Global Fund, a not-for-profit that mobilizes nearly $4 billion annually to fight disease, is speeding up delivery of life-saving medicine and assistance with help from Adobe Sign, an Adobe Document Cloud solution. When budgets and plans need to be executed quickly, digital document processes such as concurrent signatures and preserved formatting (via PDF) are critical to delivering aid. A digital process also enables the organization to complete recruiting activities in less than a day, which helps to staff the fast-growing areas where it serves.
Digital transformation is one of the key components enabling organizations and businesses to deliver their services without interruption and to serve a greater number of people in less time – both characteristics that are crucial during disaster recovery.

Adobe DITA World 2017 – Day 1 Summary by Danielle M. Villegas


Hello everyone! My name is Danielle M. Villegas, and I’m the “resident blogger” at Adobe DITA World 2017. In this blog post I will sum up the presentations of Day 1 of the conference.
There was a lot of information on this first day of Adobe DITA World, but hopefully, I’ll be able to give you some of the highlights of each talk.
After Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened up the virtual conference room, things started quickly. We were told that last year, more than 1,400 attendees signed up for the event; this year, Adobe DITA World got more than 2,500 registrations worldwide. That’s a lot of people attending!
The conference started off with a short welcome note from Adobe President and CEO, Shantanu Narayen. His main message was that our devices enable us to do so much more and in a personalized way, and we are the creators! He emphasized that this week, we’ll be hearing from experts who will help us to create, manage, and deliver world-class experiences for the best customer experiences. Adobe provides all the tools to make this happen!

In this post:

[Keynote] Scott Abel: “The Cognitive Era and the Future of Content”
Juhee Garg: “Technical content as part of your Marketing Strategy”
Philipp Baur: “The Triple C of Good DITA”
Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”
Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”
Sarah O’Keefe: “Content – Is it really a business asset?”
Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”

Keynote from Scott Abel: “The Cognitive Era and the Future of Content”

Scott Abel is the CEO of “The Content Wrangler” company, which is the official media partner of Adobe DITA World 2017. Scott is always a dynamic speaker!
The main focus of Scott’s talk was how the future of technical communications will center on creating content that does things FOR our customers by producing machine-ready content – because content is a business asset!
Scott opened with some statistics about obesity. As someone watching his own health, he used the business of his nutritionist, Manuel, as an example of content that needed better capabilities. (Manuel hired Scott after helping Scott reach one of his health goals – a satisfied customer!) Manuel needed to publish his content to multiple channels, but lacked capabilities like personalized content. His content was created to be read by humans, not computers, which prevented automatic interchange between systems. The problem could be addressed through single-source publishing, adopting a unified content strategy, creating intelligent content, or even adopting DITA for topic-based content. But that might not be enough to beat the competition: a differentiator was needed, and right now Manuel’s business isn’t scalable. Patients want exceptional experiences, yet we make them search for what they need. As content creators, we need to focus on how we deliver those exceptional experiences. Customers don’t want to learn your jargon or search for things; they don’t want to do work that should already have been done for them to get what they want.
This is where, Scott explained, cognitive computing comes into play. Cognitive computing involves self-learning systems that learn at scale, reason with purpose from the data, and interact with humans naturally through natural language processing. It’s a collection of different applications. Manuel could use cognitive computing to collect various preferences and habits, as well as family and other health history data, and combine it with customers’ personal data and public data to create a personalized content experience for his customers.
What if he could connect his services to others offering similar services? Scott presented the idea that personal service managed using content management can yield an exceptional customer experience.
What if you could do the same thing? Scott suggested that it takes at least five steps to go in this direction:

You must have a willingness to explore, not always have ROI in mind,

You will need a disruptive mindset,

You will need intrapreneurial thinking – be a risk taker,

You will need top-level leadership support, and

You will need to have the resources, time and budget.

While cognitive content is the future, it’s not as close as we’d like to think: depending on whom you ask, artificial intelligence (AI) is estimated to reach full practice somewhere between 28 and 75 years from now! Cognitive content relies on AI, an idea originally derived from science fiction.
There are three main types of AI, as Scott explained:

Strong AI – This is AI like in the movie “Her,” where the AI had god-like intelligence

Full AI – This would be more generalized AI, able to perform intellectual tasks – think of HAL in the movie “2001: A Space Odyssey” passing a Turing Test.

Narrow AI – This is what we have now, also known as Weak AI. Examples include digital assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, or Google Home. These all require machine-ready content, as they are essentially chatbots: we provide commands, and they provide answers within their programmed scope.

We’re stuck in the assistive past, utilizing assistive solutions. We need to move towards acting on behalf of our users to help them achieve their goals, which means we need agentive solutions that work like personal valets. We are starting to move in that direction, but we’re not quite there yet. Narrow AI agents already exist: a Roomba or a Nest Thermostat learns your behavior. Information awareness plus machines doing the work equals agentive action, like Google Alerts.
How do you decide between assistive versus agentive solutions?
Agentive solutions are delegated, measurable, user-focused, and driven by user input; otherwise, a solution is merely assistive or automatic. Agentive solutions are vigilant, don’t need reminders, and don’t suffer from decreasing expertise. They are attentive to detail, don’t look for workarounds, and are built for speed. Assistive solutions offer none of these features.
Scott warned that the perceived dangers of using AI are:

“AI Washing,” which is basically marketing mumbo-jumbo,

AI will create autonomous weapons used to kill us, and

Robots will replace us.

Scott concluded that many types of niche content professionals will be needed moving forward – technical communicators are important in the content equation! To learn more, he recommended the book Designing Agentive Technology by Christopher Noessel. He also invited us all to attend the conference he runs, Information Development World, which takes place November 28–30, 2017, and promises to be a great conference on preparing for chatbots and other cognitive computing.

Juhee Garg: “Technical content as part of your Marketing Strategy”

Juhee Garg works at Adobe on the XML Documentation Add-on for Adobe Experience Manager (AEM), Adobe’s enterprise-class DITA CCMS.
Juhee started by talking about the digital evolution: buyer behavior is changing because buyers can learn a great deal with the click of a button, and they now form opinions based on digital searches. Business buyers don’t contact suppliers until 57% of the purchase process is complete.
A typical buyer research process might start at the product website, then proceed through white papers, product manuals, how-to videos, user guides, and case studies, then a competitive comparison, before finally reaching an admin guide. Buyers now move between marketing content and technical content – it is all product information, and the boundaries between the two are blurring. Yet technical content is not usually part of the marketing strategy, because it’s considered a cost center and lacks IT support. A better alignment of these kinds of content is needed, but that’s hard when separate ecosystems create different content. System integration is an IT nightmare: it can be hard to coordinate tech content with web CMS/marketing content; it’s difficult to keep templates in sync, preserve content integrity, and push updates; shared content can get duplicated; and maintaining multiple systems is difficult.
How do we break down the silos? By bringing in the appropriate tools and bringing the two content creation groups together on a common platform and content model that can go out to the users. The advantages of this approach are a unified content strategy, a consistent user experience, and shared and reused content, resulting in effective content and communication.
The XML Documentation Add-on for AEM is a tool that provides that link: it enables authoring and collaboration with DITA content directly in AEM, providing end-to-end content management capabilities and multi-channel publishing.
The benefit is blended publishing: you can inject DITA-based technical communication content directly into AEM, mixing Marketing and Technical Communication on one website.
Juhee gave us a demo of how this works directly in AEM. The add-on provides a WYSIWYG-friendly editor that allows someone unfamiliar with writing in DITA to write and edit in AEM in a DITA-friendly way. There is still a source view as well, so a tech writer can see all the XML tags and tweak them as needed, and all DITA features are supported by the editor. The publishing model is also very user-friendly: it’s easy to move elements around in the structure to change the taxonomy as needed, and DITA can be published as an AEM site, reusing templates from the marketing site if desired. Publishing is easy, with output as a website, a PDF, HTML5, an EPub, and other advanced options; Pagewise PDF is a special output feature that creates a PDF of each AEM page in the site. Much of the editing of a website in AEM is “drag and drop” of components/widgets, which looked very easy to do! Through the demo, Juhee showed how marketing and tech comm can align easily using these tools, and what the result looks like when published. The add-on can be configured for whichever version of DITA you are using, including DITA specializations, and AEM integrates well with Adobe Marketing Cloud and Adobe Target, so you can see analytics as well. The new 2.5 release, expected next week, will bring these features and new ones!

Philipp Baur: “The Triple C of Good DITA”

Philipp Baur is from Congree Language Technologies, a 30-year-old company based in Germany that focuses on software and services for author assistance, serving about 90 customers. The Congree Authoring Server checks spelling, grammar, and style according to company standards, checks terminology against the term database and abbreviation use, looks up similar sentences and terminology information, and stores new content for everyone to use – automatically and in real time as you write. It integrates directly into the editor you are using and can be deployed company-wide for consistency.
Philipp started his talk by reviewing topic-oriented documentation and DITA. He began with a definition of a topic:

Independent information carrier

Contains enough information to be viable by itself

Answers a specific & unique question

Can be combined freely with other topics

Not created for specific documents but for the entire company

Why would we write this way?

Topics make content more manageable

Several authors write on the same document

Makes proofreading and translation more flexible

Saves money by reusing content

Modern devices require optimal space management

Single point of truth

Easy to apply thanks to standards like DITA

DITA offers a predefined structure for topics, and with the help of metadata, topics can serve different target groups, products, and purposes.
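To make this concrete, a minimal DITA concept topic might look like the following (the topic id, product name, and audience values are invented for illustration); the prolog metadata is what lets the same topic serve different target groups:

```xml
<!-- battery-care.dita: a self-contained concept topic.
     The prolog metadata (values illustrative) lets the same topic
     serve different audiences and products via filtering. -->
<concept id="battery-care">
  <title>Caring for the battery</title>
  <prolog>
    <metadata>
      <audience type="user"/>
      <prodinfo>
        <prodname>WidgetPro</prodname>
        <vrmlist><vrm version="2.5"/></vrmlist>
      </prodinfo>
    </metadata>
  </prolog>
  <conbody>
    <p>Charge the battery fully before first use.</p>
  </conbody>
</concept>
```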
The Triple C of good DITA was defined as cohesion, coherence, and consistency.

Cohesion

It’s the glue between two sentences.

It’s necessary for the reader to link two sentences.

Examples would be words like and, so, yet, etc., or pronouns like this, some, or it.

Unnecessary or wrong use of cohesion undermines the purpose of topics.

Example: I like my cat. The cat would kill me if she could.
Change to: I like my cat. But she would kill me if she could.

Coherence

Ensures that content has some sort of inner connection.

Avoids contradictions.

Avoids confusion.

Incomplete topics increase the risk of confusion.

Example: My cat is not for sale. Contact me if you want to buy my cat.


Consistency

The invisible thread accompanying the reader through your documentation.

It can be split into language consistency, style consistency, and content consistency.

Language consistency – British vs. American English and spelling, etc.

Style consistency – how the user is addressed; tone of voice; use of passive voice; level of politeness; sentence complexity; use of modal verbs

Content consistency – identical sentences for identical ideas; using the same word for the same concept; violations are problematic for translation and for the reader

Hard to achieve

Inconsistencies throw off the reader, interrupt concentration, and can lead to misunderstandings

How Congree can help

Users can use Congree in conjunction with FrameMaker.

Philipp gave a demo showing that Congree can display violations that need correcting in a FrameMaker document so they can be fixed for consistency. You can click on each violation to make changes as needed, and the style guide is integrated into Congree.

You can learn more about Congree on their website.
You can contact Philipp at pbaur@congree.com, or at info@congree.com. And check out the Congree channel on YouTube! If you are interested in seeing a personal demo to see if this is an appropriate product for you, email Philipp!

Ulrike Parson: “Bringing together what belongs together: DITA as the glue between content silos”

Ulrike Parson is also from Germany and owns Parson AG. She presented a case study based on work she did with a semiconductor company that showed how she and her colleague broke down the content silos of her client using DITA as the glue.
The challenges they faced:

They had to look at customer-facing and developer technical documentation.

Documents were created by different groups.

Information was created in different life cycle phases.

There was a diversity of tools for authoring, content management, and publication.

Reuse across lifecycle phases and systems was done mostly by copy and paste.

There was a high effort for changing information and keeping it consistent.

It was impossible to estimate the consequences of change.

The team held many workshops and meetings to define measures and set goals: connect the content silos, make information consistent and reusable across systems, bring together information products and company groups, and make relations between artifacts from different domains visible. The workflow would be to create a requirement, then provide a test case, then develop the code or device, which would yield the documentation.
The solution to connect the silos involved defining the requirements and engineering domains. They found that semantic middleware was needed with properties to connect all the groups to each other, plus connectors to import objects and relations. The chosen output was DITA for content, with metadata for better control of all the imported elements. Instead of using one tool for everything, this approach let each team keep its tool for its original purpose, with full features required only there. It allowed for different update and release cycles, avoided disrupting functioning workflows that are hard to change, used the existing IT infrastructure, and focused on reuse and consistency requirements.
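The talk names RDF among the standards used; purely as a sketch (every identifier below is invented for illustration), the semantic middleware might hold triples like these to make cross-silo relations explicit and traceable:

```turtle
@prefix ex: <http://example.com/ns#> .

# A requirement, the test case that verifies it, and the DITA topic
# that documents it, linked so the impact of a change can be traced.
ex:REQ-42 a ex:Requirement ;
    ex:verifiedBy    ex:TC-7 ;
    ex:documentedBy  ex:topic-battery-care .

ex:TC-7 a ex:TestCase .
ex:topic-battery-care a ex:DitaTopic .
```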
Documentation 3.0 was essential to making this work. It included:

Inputs fed from the original silos, made up of:

Requirements – original system, export format, exported in DITA

Test Cases – original system, export format, test specifications produced in DITA

Source Code – original system, export tool, code comments, all in DITA

Technical parameters – defined characteristics, parameters, features, and values – DITA subject scheme maps and DITAVAL

Documentation – DITA-XML based files, managed, subject scheme maps for variants, DITA keymaps for configurable data
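For context, DITAVAL-based variant filtering works roughly like this (attribute values are illustrative): a small .ditaval file declares which flagged content to keep or drop at publish time, so one set of sources yields several variants:

```xml
<!-- conditions.ditaval: include product "alpha" content,
     exclude "beta" and internal-only material -->
<val>
  <prop att="product"  val="alpha"    action="include"/>
  <prop att="product"  val="beta"     action="exclude"/>
  <prop att="audience" val="internal" action="exclude"/>
</val>
```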

It would all feed into the middleware, which would include:

A semantic model for intelligent information,

Some products like IBM Watson, or graph database products,

Components that would act as the glue (the same components used in the products)

Import-Export interfaces

Use established standards: DITA, RDF, ReqIF

Use established interfaces (REST)

No interface? Use a standard exchange format. Start with one direction only

Reusable DITA modules

Use of DITA

Use centralized framework, templates and subject scheme maps

Use intelligent referencing mechanisms and configurable data as keys

If no CMS, find a way to trace use of modules

Consider how much of the semantics to transport to DITA

Documentation was formed by combining generated and authored text. They found that single-sourcing documents, and single-sourcing document variants for publication on the website, for internal use, and for certifications, worked optimally. Documents could be published automatically on a build server, while generated DITA modules were monitored for changes in the original systems, all on a company-wide DITA framework.
They had less success building a dashboard in the web portal; it was not as successful as hoped. There were issues with traceability of modules from source to documentation, and problems with coverage analysis and with metrics derived from relations, such as those between requirements and test cases. Despite creating a central access point for all information for development projects, it was hard for workers to migrate because they were used to their old ways.
Lessons learned from this experience were the following:

Reuse of information must be based on a solid and scalable metadata model.

Use of standards makes your solution future proof.

DITA provides a good basis for intelligent content.

Creating integrated information for reuse requires a corporate effort.

Integrated information requires new processes.

Migration can be a huge effort.

“More authors” means “more training.”

Ulrike considered what it would take to work towards Information 4.0, but said, for now, it’s better to stick to Information 3.0 because:

Intelligent content is more than reusing information.

Intelligent content is modular, machine-readable content, enriched and delivered with metadata for enhanced usage.

DITA is the perfect basis for intelligent content, as it supports modularization and metadata.

Standards for metadata for technical communication are emerging (like iiRDS).

Technical communicators will become content and metadata curators.

You can contact Ulrike at ulrike.parson@parson-europe.com

Tom Aldous: “Using DITAMAP / FrameMaker for non-DITA content”

Thomas Aldous has been in the technical communications industry for 30 years, including stints at InTech, Adobe, and Acrolinx; he now consults as The Content Era.
The goal of this session was to provide solutions for three audiences: those with non-DITA XML content in a non-FrameMaker application who would like to change authoring and publishing environments; those currently authoring non-DITA structured XML content who would like to migrate gradually from their current structured or unstructured content to DITA; and those who would like to manage all content in a DITA XML structure and publish to outputs like a complete website, HTML5, PDF, a mobile app, or help.
Tom was going a little fast for me to keep up with him, but this is what I was able to glean:
A DITAMAP can let you organize topics that you want to publish. You can also generate navigation files based on the map structure and generate links that get added to the topics.
A map file references one or more of any XML file using <topicref> elements. The <topicref> elements can be nested to reflect the desired hierarchical relationship of the topics.
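A minimal map along those lines (file names are invented for illustration) might look like:

```xml
<!-- guide.ditamap: nested topicrefs mirror the desired hierarchy -->
<map>
  <title>Product Guide</title>
  <topicref href="installing.dita">
    <topicref href="prerequisites.dita"/>
    <topicref href="first-run.dita"/>
  </topicref>
  <topicref href="troubleshooting.dita"/>
</map>
```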
Why does it matter? FrameMaker supports DITA, including DITAMAPs, even if the content follows a non-DITA structure, and it can be configured for most structures.
Tom called FrameMaker the “monkey wrench” of structured publishing, as it can handle just about anything related to DITA.
XML content comes in several “flavors”:

One long file

One small map file with pointers to files in the order they should be published; entity references do not normally have DTD callouts

One small map file with pointers to files in the order they should be published; with Arbortext and others, DTD callouts are used

DITA – a DITAMAP or bookmap with pointers to topic files in publishing order

If you start with one long XML file (the example he used had over 6,000 lines), the long file can be converted into a DITAMAP: the content is cut into chunks using some scripting, then mapped.
Tom noted that there are lots of examples of custom XML structures and other standards, and that you don’t have to move completely to DITA; you can also create an XSL stylesheet to transform your current XML into the DITA structure.
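As a sketch of that approach (the source elements chapter, heading, and para are invented for illustration), such a stylesheet could start like this:

```xml
<!-- custom-to-dita.xsl: map a hypothetical custom structure to DITA -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- each custom <chapter> becomes a DITA <topic> -->
  <xsl:template match="chapter">
    <topic id="{@id}">
      <title><xsl:value-of select="heading"/></title>
      <body><xsl:apply-templates select="para"/></body>
    </topic>
  </xsl:template>
  <!-- each custom <para> becomes a DITA <p> -->
  <xsl:template match="para">
    <p><xsl:apply-templates/></p>
  </xsl:template>
</xsl:stylesheet>
```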
Tom proceeded with a demo. He started by opening the long XML file and showed that you can bring in the DTD, name your application, create a template and read/write rules, set the namespace, define doctypes, and support entity locations.
By using an ExtendScript utility that The Content Era created to chunk the files, he was able to create the DITAMAP as well. The XML view lets you configure content in any way you want, and the ExtendScript merges all the chunks seamlessly.
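Tom’s utility is ExtendScript running inside FrameMaker; purely to illustrate the chunk-then-map idea, here is a rough stand-alone Python sketch (the source structure of section and title elements is an assumption, not Tom’s actual format):

```python
import xml.etree.ElementTree as ET

def chunk_to_topics(xml_text):
    """Split a long XML document into one DITA topic per top-level
    <section> and return (topics, ditamap_text).

    Assumes the source uses <section id="..."><title>...</title>...>;
    a real source needs its own mapping rules.
    """
    root = ET.fromstring(xml_text)
    topics = {}
    refs = []
    for section in root.findall("section"):
        topic_id = section.get("id")
        title = section.findtext("title", default=topic_id)
        # serialize every child except the title into the topic body
        body = "".join(
            ET.tostring(child, encoding="unicode")
            for child in section
            if child.tag != "title"
        )
        topics[f"{topic_id}.dita"] = (
            f'<topic id="{topic_id}"><title>{title}</title>'
            f"<body>{body}</body></topic>"
        )
        refs.append(f'  <topicref href="{topic_id}.dita"/>')
    ditamap = "<map>\n" + "\n".join(refs) + "\n</map>"
    return topics, ditamap

sample = """<doc>
  <section id="intro"><title>Intro</title><p>Hello.</p></section>
  <section id="setup"><title>Setup</title><p>Steps.</p></section>
</doc>"""

topics, ditamap = chunk_to_topics(sample)
print(sorted(topics))   # → ['intro.dita', 'setup.dita']
print(ditamap)
```

Each chunk becomes its own topic file, and the generated map preserves the original publishing order, which is the essence of the long-file-to-DITAMAP conversion described above.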
The way he did this within FrameMaker was to choose Structure > Structured Application Designer from the top navigation, load an existing application, and then add all the details in the pop-up screen. Tom warned that read/write rules are the most difficult and most powerful part, but they are now easily editable in FrameMaker, where you can add the template, add doctypes, and so on.
His advice was that you should understand your own domain content – make it intuitive, and create solutions for your content.
Tom likes complex challenges, so contact him if you are really stuck! He reminded us that XML is 16 years old now, so it’s a strong standard.
You can contact Tom through his company website, LinkedIn, or Twitter. He’ll next be at an Adobe event the day before the start of LavaCon 2017.

Sarah O’Keefe: “Content – Is it really a business asset?”

Sarah O’Keefe from Scriptorium Publishing contends that content is a business asset, especially if it’s good content. It means that people don’t return products or call customer service. Quoting Tim O’Reilly, “Technical information changes the world by spreading the knowledge of innovators.”
How is content an asset?

Meets regulatory requirements

Enables customers to use a product successfully

Provides reference information to prospective buyers doing research

Supports the brand message

When assets go wrong, it can be due to a number of reasons. They can include:

Product is recalled because of incorrect content

Frustrated customers return products – 25% of returns are due to bad instructions

Prospective buyers don’t find what they need

It contradicts the branding

Information is out of date

Bad execution

The content is not appropriate for the audience

How do you determine if your content is an asset or a liability? It needs to meet a hierarchy of content needs:
The minimum viable content comprises the Available, Accurate, and Appropriate levels of the hierarchy; if these aren’t met, the content is a liability. Content that is also Connected and Intelligent is an asset.
The customer journey now has to be looked at holistically. Content types are converging – we used to have a marketing funnel, but now we have a circular process. In the marketing funnel, you matter until you buy; then you don’t matter. It’s the battle between pre-sale and post-sale documents, and between persuasive information and product information. In the customer journey, we care about you at every step – you matter through the whole process. Convergence happens when all the different documentation is used together.
Sarah gave an example by telling a story about the disconnection between the website and the instructions included in the box of a product she bought. She emphasized that, unfortunately, you can’t control content use in the customer journey.
The Internet of Things (IoT) and the connected enterprise pull in many of these concepts, and here content is a huge asset. In the connected home, you can communicate with devices in your smart home, getting information and having devices perform actions. The connected enterprise is the connected factory – Industry 4.0, robotics, and automation – with its attendant security concerns.
IoT devices require intelligent content that is location-aware, time-aware, context-aware, system-context-aware, and provides context-sensitive help. This can be achieved by improving search: searchability (information is exposed to a search engine), findability (information shows up when people search for it and performs well for certain keywords), and discoverability (other people link to your content and recommend it – reputation matters). Your reputation affects content distribution!
Digital business transformation occurs through good data hygiene. The ways to achieve this include:

No more back-formation of data

Single source of truth

Content is derived from data

Content is not data storage

For example, a product gets made, and then technical publications capture information. Then product specifications change, but corrections aren’t made at the source. The document becomes the source of truth, which is not an appropriate role for tech pubs. Content Management 1.0 needed traceability (where did content come from?), content usable in various forms, distribution, and localization workflows (reduce, reuse, recycle). Localization is very important in this process.
Sarah concluded by saying that good content is an asset if you are following content trends by going beyond technical accuracies.
Sarah has written a white paper on the topic, The Age of Accountability: Unifying Marketing and Technical Content with Adobe Experience Manager, which you can access for more information. Technical documentation is all about scalability. Sarah concluded that content needs to be useful and consistent for the customer at an affordable rate.

Robert Anderson: “What Is DITA Open Toolkit, and What Should FrameMaker Authors Know About It?”

Robert D. Anderson from IBM has been working on DITA-OT almost since its inception.
What is DITA Open Toolkit?

Open Source software

It’s a program (technically a collection of programs) intended to read DITA and produce something else

It’s not part of DITA, but it’s there to make your DITA do something

DITA-OT is software that turns your stuff into something else (that’s not usually DITA)

It’s an implementation of DITA

Originally a developerWorks project at IBM

DITA-OT became open source when DITA became an open standard

Without tools, who would use DITA? If it’s not a shared standard, who would want DITA-OT?

DITA-OT was created to help all DITA users get off the ground more easily, including authors and vendors trying to support DITA.

DITA-OT core features:

Key resolution

Content references

Link and metadata management


Branch filtering, and more
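Key resolution, the first feature above, lets topics reference a key that the map defines once; a minimal example (key name and value invented for illustration):

```xml
<!-- In the map: define the key once -->
<map>
  <keydef keys="product-name">
    <topicmeta>
      <keywords><keyword>WidgetPro 2.5</keyword></keywords>
    </topicmeta>
  </keydef>
  <topicref href="overview.dita"/>
</map>

<!-- In a topic: reference the key; DITA-OT resolves the text at build time -->
<p>Welcome to <ph keyref="product-name"/>.</p>
```

Swapping the keydef in the map updates the text everywhere the key is referenced, which is why key resolution belongs in the core toolkit rather than in any one output plugin.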

It also includes pre-processing steps like merging DITAVAL conditions, merging maps, retrieving link text, evaluating @copy-to, adding DITAVAL flags, and more

How do I pre-process? You don’t – usually, it’s for those who want to super-customize things

From core to publish:

Ships formats out of the box: HTML5, PDF, XHTML, Eclipse Help, CHM, troff, and a few others (RTF, ODT, JavaHelp). Some are add-ons that are no longer maintained

Plugins available for other formats

Styles are generic and meant for customization. Check out Jarno’s PDF plugin generator to create a custom PDF.
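Running these transforms is typically a one-liner with the dita command that ships with DITA-OT (file names here are illustrative):

```shell
# HTML5 output, filtered by a DITAVAL file
dita --input=guide.ditamap --format=html5 \
     --filter=conditions.ditaval --output=out/html5

# Default PDF output
dita --input=guide.ditamap --format=pdf --output=out/pdf
```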
More exciting stuff that DITA-OT can do:

Add preprocessing steps

Add or modify generated text

Custom HTML5 navigation

Switch or extend CSS

Use XSLT to override styles

Create entirely new output formats

Extensions are usually stored in a plugin, as with the PDF plugin generator
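As an example of that packaging (plugin id and file names are invented for illustration), a plugin.xml can hook a custom XSLT override into the HTML5 transform through an extension point:

```xml
<!-- plugin.xml: contribute an XSLT override to the HTML5 output -->
<plugin id="com.example.html5.custom">
  <feature extension="dita.xsl.html5" file="xsl/custom.xsl"/>
</plugin>
```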

FrameMaker does not use DITA-OT; it publishes PDF through its own engine. The more complicated your needs get, the more you will use the toolkit.
Should you care about toolkit updates?

If you’ve decided to use an open standard, if you or your tools or partners use DITA-OT, or if you want the benefit of common, shared open source – then yes, update!

When you work with business partners who use a custom HTML5 framework, use an elaborate PDF style with custom plugins, need to publish to an exotic format, or feed XML input into an automated system, you need DITA-OT

Updates include things like:

Common preprocess fixes

Changes to how final rendered content is generated for all

Who governs DITA-OT?

Active participants – anybody can participate, and the more you participate, the more influence you have

Most are from language, communication, and computer science backgrounds

With great open source, comes great responsibility.

Most are volunteers or report to their own managers

Anyone CAN fix a bug or add a feature … but sometimes you have to add it on your own

Useful skills to have to use DITA-OT:

The best way to suggest changes?

GitHub pull request

GitHub issue tracker

Attend contributor calls

Ask your DITA vendor

Resources that Robert provided:

Day 1 Conclusions
The day concluded with Stefan and Dustin thanking today’s presenters, and inviting everyone to return for tomorrow’s presentations.
See you tomorrow on Day 2 of Adobe DITA World 2017!