Content as Conversation: Creating More User-Friendly Content

Creative Cloud


I once worked with a non-profit healthcare client who was creating an informational website to help people make medical decisions. One of the challenges that became clear early on was the need to support the client in making the website content user-friendly and accessible.

Medicine and public health are very evidence-based domains. People working in this space have a clear concern about delivering accurate and statistically sound information, and rightly so. A particular challenge is making the content feel relatable to a general public audience. When making decisions about their health, people need to connect this evidence-based medical data back to the impact it has in their particular circumstances.

How might we support our clients and stakeholders in developing content that is compelling and user-friendly, and that feels like a two-way conversation rather than a one-way push of information? Through our research, we discovered that our user group was used to talking to people about these types of decisions – for example, their doctor or a trusted friend or family member. This was a key insight that led us to reconsider the design of the information and make the website feel like a conversation.

Tips and tricks

Content as conversation can be a great guiding principle in cases where user engagement is key, or where you need people to take a self-guided journey in discovering a topic. Especially in the case of health information, making content conversational rather than dictatorial is really important; it can be very off-putting to be told what to do or to feel you are being lectured. So how can you bring a conversational tone to your content strategy or the content development work you are doing? Here are some tips based on what we learned on the project.

Role play the conversation

A really powerful way to get client stakeholders and subject matter experts to think about content in a new way is to simply role play the conversation they want to have with users.

In this case, a team member played the user coming to the site looking for information to support their decision making about a medical procedure. Our subject matter expert chatted with the ‘user’, communicating the important information. A UX team member jotted down the flow of the conversation.


We used this exercise to note some characteristics of how our subject matter expert delivered the medical information:

  • Flow and branching – the conversation starting in one place and growing from there.
  • The human element – reassurance, plain language and anecdotes were woven into the serious health topic.
  • Conversation is two-way – the user decides where they would like to start, and explores the topics they are concerned about or feel are relevant, with support from the subject matter expert.

When we incorporated these characteristics into the design principles for the content, we came up with some interesting solutions. Instead of a linear, statistically heavy quiz format, we opted for a ‘choose-your-own-adventure’ style user flow which allowed users to drive the order of the content based on what they wanted to start with.
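One lightweight way to think about such a flow is as a graph of topic nodes, where the user, not the author, picks the path. The sketch below is purely illustrative; the topic names are hypothetical placeholders, not the actual content from the project:

```python
# A minimal 'choose-your-own-adventure' content graph. Each node has a
# conversational prompt and the topics the user can jump to next.
# Topic names are hypothetical placeholders for illustration.
CONTENT_GRAPH = {
    "start": {
        "prompt": "What would you like to know about first?",
        "choices": {
            "How the procedure works": "procedure",
            "Risks and benefits": "risks",
            "Other people's experiences": "stories",
        },
    },
    "procedure": {"prompt": "Here's how it works...",
                  "choices": {"Risks and benefits": "risks"}},
    "risks": {"prompt": "Let's talk about risks...",
              "choices": {"Other people's experiences": "stories"}},
    "stories": {"prompt": "Here's what others have said...", "choices": {}},
}

def next_topics(current: str) -> list:
    """Topics the user can choose to explore from the current node."""
    return list(CONTENT_GRAPH[current]["choices"].keys())
```

Because the user drives the traversal, the same underlying information can be encountered in whatever order matches their concerns, echoing the two-way conversation observed in the role play.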

Speak your users’ language

Our upfront user research had made it clear to the team that there was a big discrepancy between the drafted written content and the way people talked about the topic. Building on the role-playing exercise, we infused the principles of colloquial language, anecdotes and reassurance into the content creation.

We did this by using the words and phrases we had heard during the research phase, and we found ways to include people’s stories and anecdotes. When creating content of any kind, it is important to understand how your users think about and speak about a topic, so that the language reflects their mental model. We found that our users responded to words and phrases that matched their way of talking, and were not clinical or medical jargon. When specialised medical or subject matter language is used, relatable language helps to explain the word in context. Plain language is a powerful UX tool.

Using a reading-level checker is also a crucial way to assess content. Word processors such as Microsoft Word often include a readability assessment tool (for example, Flesch-Kincaid readability statistics), or you can use an online option. Inputting text and running the checker produces a readability score. As a rule of thumb, anything with a general public audience should aim for a reading level between grades 6 and 8.
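As a rough illustration of what these checkers compute, the widely used Flesch-Kincaid grade-level formula can be sketched in a few lines of Python. The syllable counter here is a naive vowel-group heuristic, so real tools will score somewhat differently:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels, minimum 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Approximate U.S. school grade level of a piece of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Flesch-Kincaid grade level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short sentences built from plain, one- and two-syllable words land comfortably under the grade 6–8 target; dense, polysyllabic prose scores far higher.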

Test, test, test

When designing anything the proof is in the user testing pudding! Finding ways to test the content and flow of information with people was crucial for this project. The content we developed as a team went through several rounds of user testing. In many cases this took place in the context of a usability test.

Asking open ended questions while doing a usability test can be a good way to gauge the content’s success. For example, looking at a page and asking the participant ‘what does this mean to you’ or ‘what is this telling you’ can give a good sense of what is resonating.

If you are working with a live platform, A/B testing can identify which version of the content performs better, for example based on the number of sign-ups or purchases. This works best where an explicit call to action is in play.
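If you want to check that a difference in sign-ups between two content variants is more than noise, a standard two-proportion z-test is a common starting point. This is a generic statistical sketch, not a technique from the project itself:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b: conversions (e.g., sign-ups) for variants A and B.
    n_a/n_b: visitors shown each variant. Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 50 sign-ups out of 1,000 visitors for variant A versus 80 out of 1,000 for variant B yields p < 0.05, so the improvement is unlikely to be chance; identical rates yield p = 1.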

A great approach is to print out the content you are testing, and ask people to highlight words that are unclear. The UK Government Digital Service suggests getting users to underline pieces that build confidence in green, and pieces that make them less sure in red. In aggregate, you can get a sense of what is and isn’t working.

What conversation would your users like to be having with you?

Approaching content as a conversation is a great way to keep the user in mind as you are developing your product. Thinking in human terms about the tone, pace and language of how the information is getting delivered opens up new possibilities.

Adobe Premiere Pro and Media Encoder 2015.3 (10.4) Updates



Today we are releasing updates to Adobe Premiere Pro and Adobe Media Encoder, versions 2015.3 (10.4), accompanied by new releases of Adobe Audition and After Effects. Creative Cloud members and trial users can download and install these applications today using the Creative Cloud desktop application.
The new releases contain extensive fixes to known issues and are recommended for all users. Please note that a Premiere Pro project file update is NOT required when updating to these versions.  A list of the key fixes and improvements for each application appears below, followed by a list of fixes from the previous versions (10.3), which are also included in the new update.

Adobe Premiere Pro 2015.3 (10.4) Notable Fixes

  • Proxy media support when Dynamic Linking to After Effects has been improved
  • Several new QuickTime formats are now natively supported on Windows without the QuickTime player needing to be installed (Import: AAC, PNG, Animation codec; Export: Animation codec)
  • The HDR Specular slider in the Lumetri Panel now functions correctly
  • Frame accuracy has been improved when flattening a Multi-Camera Source Sequence containing speed changes
  • Certain issues encountered when transcoding image sequences on ingest have been resolved
  • The stability of FCP XML export has been improved
  • The stability of Multi-Camera Source Sequence creation has been improved.
  • Certain issues with incorrect numbers of audio channels when creating proxy files have been fixed
  • Performance issues with H264 media on Windows when ‘Enable Accelerated H264 Decoding’ was enabled have been fixed
  • Fixed an issue where a noise burst could occur when playing back at 2x with ‘Maintain pitch while shuttling’ enabled
  • Some issues with dragging Multi-Camera Source Sequences to a sequence or to the Project Panel have been resolved
  • .stl files now conform to EBU 3264
  • The Link Media dialog no longer has certain file types disabled
  • An issue with color control surfaces not correctly applying master track controls has been resolved
  • Fixed an issue with waveforms offsetting when flattening Multi-Camera Source Sequences
  • Fixed a rendering issue on scaled clips with OpenCL or CUDA
  • An issue with opacity handles from previous versions not being accessible has been fixed
  • Fixed an issue with black frames for RED files with a certain NVIDIA driver installed (368.39)
  • An issue where audio did not play back after enabling ‘Maintain Audio Pitch’ with a speed change in place has been fixed
  • Fixed an issue where trimming in the timeline could disable Overlays from appearing

Adobe Premiere Pro 2015.3 (10.3) Notable Fixes

  • Fixed issues where QT32 Server asserts or ‘Error Compiling Movie’ errors were seen when exporting to QuickTime ProRes files
  • Fixed application being unresponsive when opening a 9.2.1 project in 10.x
  • Fixed certain proxy generation issues when files were open in the Source Monitor from the Media Browser
  • Fixed red frame issues when playing back Sony XAVC Long-G files
  • Improved audio rendering times when using multi-cam clips and adaptive audio tracks
  • Fixed frame drop issues with Cineform media on 12-core 2013 Mac Pro systems
  • Fixed performance issues when duplicate frame indicators were enabled
  • Fixed audio distortion when using USB microphones with Transmit enabled
  • Fixed playback issues on Windows when using AMD R9 cards
  • Resolved certain proxy linking issues when off-lining media
  • Improved 4K 24p H264 playback on Microsoft Surface Book
  • Fixed an issue with 608 captions dropping characters in 29.97fps
  • Fixed a thumbnail flickering issue in large projects on Mac
  • Fixed a certain crash issue when opening legacy projects
  • Non-watermarked Adobe Stock videos now correctly replace watermarked samples after licensing
  • Improved Premiere Pro performance when AME is generating proxies
  • Fixed an audio buzz issue when playing or scrubbing audio crossfades on clips with clip pan
  • Fixed a hang issue when lasso-selecting edit points
  • Fixed an exception error which could occur right after recording multicam edits
  • Fixed black program monitor issue with RED media with OpenCL and Mac OS X 10.10.5
  • Fixed OpenCL playback crash on Surface Pro 4
  • Resolved Match Source issues where certain formats were matched incorrectly
  • Issues of audio dropout when some tracks were muted have been resolved
  • Fixed an issue where growing files imported from EVS weren’t actually growing
  • Fixed an issue where peak files were constantly being regenerated on project open
  • Fixed key-frame inaccuracies in FCP XML export
  • Fixed an issue where auto-saving did not occur due to project changes not being recognized
  • Improved import performance of AAF from a directory containing several OPAtom files
  • Fixed Match Frame command for subclips
  • Fixed a crash issue when exporting to XDCAM HD 422 50Mbps MXF media with smart rendering enabled
  • Fixed issue where closed captioning files were being unnecessarily re-scanned on Windows
  • Transmit offset value now correctly disallowed when scrubbing audio
  • Fixed an issue where Sony RAW S-Log 2 media could appear transparent
  • Fixed an issue where the Work Area Bar setting did not stick
  • Fixed an assert issue when creating new Closed Captions on Mac
  • Fixed C300 media import issues
  • Resolved an issue where Merged Clip volume changes were not showing in Clip Mixer
  • Fixed incorrect aspect ratio in IMX render
  • Aspect ratio inaccuracies with certain 1920×1080 media have been fixed

Adobe Media Encoder 2015.3 (10.4) Notable Fixes

  • Fixed several issues encountered when ‘Premiere Pro > Import sequences natively’ was checked in the preferences
  • Several issues with ingest were resolved
  • Stability of the Media Browser has been improved
  • XAVC HD Intra Class 200 presets were replaced with Class 100 presets for improved compatibility with more devices
  • Issues where Destination Publishing could fail due to a time-out have been resolved
  • Certain issues with Audio Hardware preferences have been resolved
  • Fixed issues where Lumetri looks were being incorrectly rendered, and ProRes exports could appear over-exposed

Adobe Media Encoder 2015.3 (10.3) Notable Fixes

  • Significant improvements to error reporting
  • Fixed an issue where audio could repeat at the end of an encode or a silent section within a Premiere Pro sequence
  • Improved 3rd-party player support for MPEG2 exports
  • Several issues with muxing of audio and video files at the end of an export process have been resolved
  • Smart rendering performance has been improved
  • Resolved some issues where AME would keep a lock on recently encoded files such that users could not move them
  • Fixed an issue where exporting long files/sequences with MXF presets could cause an ‘error compiling movie’ error
  • Resolved several performance issues when the GPU was enabled

After Effects CC 2015.3 (13.8.1) Bug-Fix Update Is Now Available



The After Effects CC 2015.3 (13.8.1) bug-fix update is now available. This update fixes multiple bugs, including audio looping at the end of compositions when exported via Adobe Media Encoder.

This update also introduces native import and export support for the QuickTime Animation codec, and native import support for the PNG video codec and AAC audio codec inside of QuickTime files. This means you no longer need QuickTime installed on Windows in order to use QuickTime files with the Animation, PNG, or AAC codecs, part of our ongoing effort to provide native support for as many codecs as possible. These codecs are also natively supported in the updates to Premiere Pro CC 2015.4 (10.4), Adobe Media Encoder CC 2015.4 (10.4), Audition CC 2015.2 (9.2.1), and Prelude CC 2015.4 (5.0.1).

You can install the update through the Creative Cloud desktop application, or you can check for new updates from within any Adobe application by choosing Help > Updates. Please note that it can take 24 hours or more for all of our global data centers to receive the update; if the update isn’t available for you right now, please check back later.

For details of what was added, changed, and fixed in After Effects CC 2015.3 (13.8), see this page. For details of all of the other updates for Adobe professional video and audio applications, see this page.

Please, if you want to ask questions about this update, come on over to the After Effects user-to-user forum, rather than leaving comments on this blog post. (It’s much harder to have conversations in the comments of a blog post.) If you’d like to submit feature requests or bug reports, you can do so here.

bug fixes in After Effects CC 2015.3 (13.8.1)

The After Effects CC 2015.3 (13.8.1) bug-fix update addresses these bugs:

  • After Effects no longer crashes (kernel panic) on macOS 10.11 when started on certain Mac hardware with Nvidia GPUs. As a result of this change, Mercury GPU Acceleration using Metal is not available for Mac hardware with Nvidia GPUs. Mercury GPU Acceleration using OpenCL remains available.
  • H.264 and AVC footage no longer decode with random red frames on Windows computers with certain Intel HD graphics configurations. If you experienced this problem, red frames can still appear for previously cached footage until you clear the caches: click both the Empty Disk Cache and Clean Database & Cache buttons in Preferences > Media & Disk Cache.
  • Audio no longer unexpectedly loops incorrect samples towards the end of an After Effects composition, when exported via Adobe Media Encoder or dynamically linked to Premiere Pro.
  • Audio-only previews no longer fail to start if Cache Before Playback is enabled.
  • Audio-only previews now loop the preview-time indicator (PTI) to the start of the preview range as expected, in sync with audio playback.
  • Audio no longer unexpectedly loops during previews with the Skip Frames option enabled, if Cache Before Playback is enabled and you interrupt caching before the full range is cached.
  • Text template compositions in Premiere Pro no longer render some frames incorrectly.
  • The Info panel no longer opens unexpectedly if closed when, for example, you move the mouse pointer over the Composition panel.
  • Guides and grids draw at the expected width on Apple Retina displays.
  • Math equations are again evaluated when exiting numerical input fields in dialog boxes, or when OK’ing the dialog.
  • Adobe Bridge CC starts as expected when you choose File > Browse in Bridge, File > Reveal in Bridge, or Animation > Browse Presets.
  • Text, shape, or Illustrator layers no longer draw with unexpected thin horizontal lines.
  • Anti-aliasing has been corrected on text and shape layers.
  • Lights with a cone angle larger than 175° no longer cause After Effects to crash on Windows.
  • Stopping the render queue on macOS no longer incurs a long delay in certain situations before the render process stops.
  • Exporting a composition no longer has a small probability in certain scenarios of replacing random frames with the wrong frame from the cache.
  • Resizing a viewer panel is less likely to cause incorrect pixels to be drawn in the expanded or reduced regions of the panel.
  • Proxies are now recognized when sending footage to After Effects from Premiere Pro.
  • After Effects no longer crashes if you quit while Premiere Pro was rendering a dynamically linked composition.
  • After Effects no longer crashes when you set a color management profile for an output module.
  • Fixed a cause of crashes that occurred when the viewer panel was redrawn in certain cases, such as when panning with the Hand tool.
  • Fixed a cause of crashes that occurred when expressions were enabled in the project.
  • Fixed a cause of crashes that occurred when After Effects was quit.
  • Fixed a cause of crashes that occurred when After Effects was running in headless mode and quit.
  • Fixed a cause of crashes that occurred when After Effects decodes an incorrectly encoded alpha channel in Apple ProRes 4444 files written by certain third party encoders.

about native import and export of Animation codec QuickTime files

After Effects CC 2015.3 (13.8.1), Premiere Pro CC 2015.4 (10.4), Adobe Media Encoder CC 2015.4 (10.4), Audition CC 2015.2 (9.2.1), and Prelude CC 2015.4 (5.0.1) can now import and export QuickTime files that use the Animation codec without assistance from QuickTime 7 on Windows, or from the Adobe QT32 Server process on macOS.

Native import of Animation currently only supports I-frame only movies (no compression key frames set).

Native export of Animation currently only supports I-frame only movies (no compression key frames set) at 100% quality.

Native export of Animation only supports the Uncompressed audio codec. Attempting to export a QuickTime file with the Animation codec and compressed audio codecs will fail with an error message, “Compressor format error”.

about native import of PNG codec QuickTime files

After Effects CC 2015.3 (13.8.1), Premiere Pro CC 2015.4 (10.4), and Adobe Media Encoder CC 2015.4 (10.4) can now import QuickTime files that use the PNG codec, without assistance from QuickTime 7 on Windows, or the Adobe QT32 Server process on macOS.

about native import of AAC audio codec QuickTime files

After Effects CC 2015.3 (13.8.1), Premiere Pro CC 2015.4 (10.4), Adobe Media Encoder CC 2015.4 (10.4), Audition CC 2015.2 (9.2.1), and Prelude CC 2015.4 (5.0.1) can now import QuickTime files that use the AAC audio codec without assistance from QuickTime 7 on Windows, or from the Adobe QT32 Server process on macOS.

Native import of AAC in container formats other than QuickTime was added in a previous release.

Native import of AAC does not support AAC LD (low delay) or AAC ELD (enhanced low delay).

known issues in After Effects CC 2015.3 (13.8.1)

The following may affect your experience using the new functionality in After Effects CC 2015.3 (13.8.1):

  • Exporting a QuickTime file with the Animation codec will fail if the audio codec is not set to Uncompressed, with an error message, “Compressor format error”. To work around this, set the audio codec to Uncompressed.
  • Compressed audio codecs other than AAC in a QuickTime file are not natively supported. On a Windows computer without QuickTime 7 installed, importing a QuickTime file with compressed audio codecs other than AAC (e.g., Apple Lossless, IMA 4:1, etc.) may fail to import (error “file is damaged or unsupported”) or fail to play back correctly.
  • Some of the bugs fixed in After Effects CC 2015.3 (13.8.1) may not appear to be fixed for your existing compositions or footage until previously cached frames are cleared from disk caches. Specifically, this applies to the bug fixes for red frames in H.264/AVC files, audio looped at the end of a composition in Adobe Media Encoder or Premiere Pro, and wrong frames rendered by text template compositions. You may need to do the following (try these steps in order):
    • In After Effects, in Preferences > Media & Disk Cache, click Clean Database & Cache, under Conformed Media Cache.
    • In After Effects, in Preferences > Media & Disk Cache, click Empty Disk Cache, under Disk Cache.
    • In Premiere Pro, choose Sequence > Delete Render Files, or Delete Render Files In to Out, or Delete Work Area Render Files.
    • Manually delete the files in both the Media Cache and Media Cache File folders:
      • macOS: /Users/user name/Library/Application Support/Adobe/Common/Media Cache and Media Cache Files
      • Windows: C:\Users\user name\AppData\Roaming\Adobe\Common\Media Cache and Media Cache Files
  • Apple’s Metal technology is not available for Mercury GPU Acceleration in After Effects CC 2015.3 (13.8.1) on Mac computers with an Nvidia GPU. We are investigating with Nvidia the cause of system crashes due to a kernel panic with some combinations of Mac computers and Nvidia GPUs, and plan to re-enable Metal support in a future release. Mercury GPU Acceleration using OpenCL remains available on Mac computers with Nvidia GPUs.

Audition CC 2015 Patch Update Now Available



We’re happy to announce that an update for Audition CC 2015 went live today. This patch update (internal version 9.2.1) addresses several bugs discovered too late to fix in, or shortly after, the release of the official 9.2 update. These fixes include:

  • Fixed the on-clip UI status of grouped and suspend-grouped clips
  • Fixed a crash when moving a set of clips to new tracks during playback
  • Effects Rack presets are now properly imported into the new version
  • Fixed cases where Multitrack clips were not audible during playback
  • Improvements to ProRes QuickTime video file import
  • Fixed Mixer gain settings not visually matching the Track Panel settings
  • Fixed muted clips on a non-muted track becoming un-muted
  • Better handling of external media disconnection when using the Media Browser
  • and more.

Download the latest update from your Adobe Creative Cloud desktop app.

See what’s new in the latest update to Premiere Pro, Media Encoder, and After Effects CC.

More Than Just Pretty: How Imagery Drives UX



As the saying goes, a picture is worth a thousand words. Human beings are highly visual creatures, able to process visual information almost instantly; by one commonly cited estimate, 90 percent of the information transmitted to our brains is visual. Images can be a powerful way to capture users’ attention and differentiate your product. A single image can convey more to the observer than an elaborate block of text. Furthermore, images can cross language barriers in a way text simply can’t.

More than just decoration, images have the power to make or break a user experience. The following principles and best practices can help you successfully integrate imagery into your design and bring your app or site to life.

Only Use Relevant Images

Compelling images have a unique ability to inspire and engage your audience. But not all images improve the experience; some just take up space or, in the worst case, confuse the user. One of the most dangerous elements in any design is imagery that conveys the wrong message.


Images that aren’t related to the topic often cause confusion. Image credit: hubpages

Users react to visuals faster than text, so make sure your content matches the supporting visuals. You should select images that have a strong relationship with your product goals and ensure that they are context-relevant.

Images Shouldn’t Create Visual Noise

The purpose of your site or app probably isn’t to showcase images; instead, the images you choose should showcase the purpose of your product. Use a limited number of striking visuals in your design: the ones that really capture a user’s attention. For instance, Apple’s homepage features just one large image as the focal point of the screen.


Apple’s homepage features a huge image of the latest product.

This page works because the image shows the newest product in detail and provides useful information for users.

Use High-Quality Assets Without Distortion

Make sure your images are appropriately sized for displays across all platforms. Images shouldn’t appear pixelated, so be sure to test appropriate resolution sizes for specific ratios and devices. Display photos and graphics in their original aspect ratio, and don’t scale them above 100%. You don’t want the artwork or graphics in your product to look skewed, or too small or too large.


Left: Degraded imagery. Right: Correct resolution. Image credit: Adobe Photoshop
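The "fit without distortion" rule above can be sketched as a small helper that computes the largest size fitting a target box while preserving the source aspect ratio and never upscaling past 100%. This is an illustrative sketch; any image library's "fit" or "contain" mode does the same:

```python
def fit_within(src_w: int, src_h: int, max_w: int, max_h: int):
    """Largest display size that fits inside (max_w, max_h) while
    preserving the source aspect ratio, without upscaling."""
    # Take the tightest constraint; cap at 1.0 so we never scale above 100%.
    scale = min(max_w / src_w, max_h / src_h, 1.0)
    return round(src_w * scale), round(src_h * scale)
```

For example, a 4000×3000 photo shown in a 1920×1080 slot becomes 1440×1080 (ratio intact), while an 800×600 photo is left at its native size rather than being blown up.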

Image-Focused Design Isn’t For Every Page

Getting people’s attention with aesthetically pleasing images certainly has value, but it comes at the price of making other elements harder to see and use.

Putting too much focus on images in your design can create visual overload that seriously distracts users from meaningful engagement with your content. You can see this effect in SoundCloud’s app, where the image takes all the attention and you barely notice the two buttons.


In SoundCloud’s home view screen, the buttons are overshadowed by the image.

Although image-focused design is appropriate in some cases (e.g., Apple’s homepage example above), the vast majority of apps and sites should follow a balanced approach: images used in the user interface should support the product, not obscure other important content or overshadow functionality.

Use Multiple Mediums

Both illustration and photography can be used within the same product. Photography is ideal to showcase specific entities and stories. For example, if we need to show not just any flowers, but roses specifically, a photo is ideal:


For specific entities, look first to photographic representation. Image credit: Adobe Stock

Illustration is effective for representing concepts and metaphors, where photography might be alienating. In the case of showing a flower, an illustration works well:


Illustration conveys an approximation of content to aid comprehension when total specificity isn’t required. Image credit: Adobe Stock


Have a Point of Focus

Imagery is a visual communication tool that conveys your message. A clear focus communicates the concept at a glance, whereas a lack of focus makes the image meaningless.

When the point of focus is obscured, the iconic quality of the image is lost:


Don’t. A lack of focus makes the image meaningless. Image credit: Adobe Stock

The most powerful iconic images consist of a few meaningful elements, with minimal distractions:


Do. A clear focus communicates the concept at a glance. Image credit: Adobe Stock

Thus, avoid making the user hunt for the meaning in the image. Chances are, they are only giving your content a glance.

Show Real People

Human images are a very effective way to get your users engaged. When we see the faces of other humans, it makes us feel like we are actually connecting with them, not just using a product. However, many corporate sites are notorious for the over-use of insincere photography employed to “build trust.”


Don’t. Inauthentic images leave the user with a sense of shallow fakery. Image credit: Material Design

Usability tests show that purely decorative photos rarely add value to the design and more often harm than improve the user experience. Users usually overlook such images and might even get frustrated by them.

A very simple rule of thumb is to use high-quality photographs of people who match your app’s or site’s character. The imagery you use should be an authentic representation of your product, company or culture.


Do. Show real, down-to-earth people and make sure that they really match your product’s character. Image credit: Adobe Stock

Trigger User Emotions

Apps and sites are built to perform a function. But beyond this, they should be engaging, exciting, even fun to use. Adding delight to the product is a great way to make it not only more human but also make it unique, because a good overall impression isn’t just about usability, it’s also about personality.

If you already have a satisfying customer experience, adding delight to your product helps create an emotional connection with your users.


Snapchat knows that even a zero-data state screen with a proper image can be fun and delightful.


Thinking about images in terms of their usability is important. All visual communication in your design leaves a cumulative impression on the user.

Compelling images have a unique ability to inspire and engage your audience while providing useful information. So take the time to make every image in your app or site reinforce the user experience.

Designing Multi-Disciplinary User Experiences That Take Users Off the Screen


Soaring 883 feet above street level, the One Liberty Observation Deck offers 360-degree views of Philadelphia’s Center City and far beyond. After a quick, narrated elevator ride to the 57th floor, visitors encounter a super-sized Benjamin Franklin sculpture, Philadelphia-focused listening stations and multi-lingual touch screens that enable them to zoom in on a single point hundreds of feet below.

Credit: Photo by M. Fischetti for VISIT PHILADELPHIA

Next time you’re in Philadelphia, look up—way up—to see the One Liberty Place Observation Deck. The deck, located some 883 feet above ground, features a 360-degree interactive panorama of the entire city. With six touchscreens and nearly 100 interactive points of interest, the digital installation helps visitors get up close and personal with the city through stories, stunning gigapixel-class photography and unique observation.

It is just one of many interactive user experiences developed in collaboration with Stimulant, a San Francisco-based company that creates place-based digital interactions. This particular experience also involved the deck’s operators Montparnasse56 and photography partner Panogs.

“What we try to do is bring social experiences back into the physical world using technology as a conduit to connect people with one another in the same space in the same time,” said Nathan Moody, Design Director at Stimulant.

It is the merging of architecture, user interface design and experience design that sits at the intersection of reality and possibility. These types of experiences explore the possibility of making physical properties interactive and, in some cases, almost human. It is a whole new way to view the user experience, but it is still very much rooted in user experience design.

“Make no mistake, everyone who works here cut their teeth on the web,” Moody said. “What’s different is bridging the gaps between disciplines and deliverables.”

Moving experiences off the screen in an increasingly digital and connected world is inevitable. Stimulant’s clients range from museums and iconic national landmarks to cutting-edge and innovative companies, each seeking to create more meaningful and interactive experiences for their customers. As more people become aware of the potential to make physical spaces as rich as virtual spaces in terms of context and interaction, physical spaces will become just as much an actor in an experience as the very people who are experiencing it.

The Disappearing Computer

Exploration into the realm of smart physical spaces and user experiences isn’t new. The Disappearing Computer Initiative was a European government-funded program that launched in 2001 and was active for just a few years. Its goal was to see how incorporating information technology into everyday objects and settings would support and enhance people’s lives.

The work was based largely on three objectives that are perhaps more relevant to user experience designers today than ever before:

  • Create “information artefacts” by embedding technology into everyday objects
  • Explore how “collections” of these “artefacts” can work together to produce “new behavior and new functionality”
  • Ensure people’s experiences with “these new environments is coherent and engaging”

These early projects mimicked much of what we’ve seen developed in recent years. One project looked at embedding location-based technology into grocery store items like cereal boxes to help people navigate stores and find groceries faster. Another explored using the physics of sound control to enable appliances to communicate with humans. And this was just the beginning.

No Rules, But More Expectations

Fifteen years and countless technological advances later, designers are still in the early stages of exploring what is possible when the physical and digital worlds collide.

“There are no best practices about doing any of this yet. Our responsibility is to educate our partners and get them excited about the possibilities versus the challenges,” Moody said.

Unlike virtual user experiences, building interactive user experiences in the physical world requires bringing outside elements into the design experience. Moody is constantly educating partners and clients around interactivity as a building material, the properties of certain construction materials, natural and traditional lighting sources, and how the physical and virtual worlds can work together to make an idea come to life.

Despite this, clients these days are often well informed and their expectations and aspirations have grown exponentially. They ask about Bluetooth low energy beacons, employ people internally to stay abreast of what’s cutting-edge, and bring examples of interactive experiences they’ve found on YouTube to help communicate their visions. Just a few years ago, these same clients were referencing Minority Report.

“They no longer need the tropes of science fiction to view what’s possible,” Moody said.

Challenges In Designing Multi-Disciplinary User Experiences

It’s not just the companies behind these experiences that are asking for more; users want more too.

“There’s been a 180 degree shift in terms of user expectation over the past eight years,” Moody said. “When we first started, you had to make it explicit for users to touch things. Touch experiences became commoditized overnight. Now everyone is going to be creating multi-touch experiences.”

Users want—and expect—experiences beyond touch, and companies are keen to deliver.
However, there is still an overall problem of awareness. Creating these types of experiences requires development and quality assurance testing that can get very complex. Various levels of augmented reality are often layered on top of an experience, making these experiences massive and expensive to develop. On top of this, the different technological components need to work together to sense when users are present or engaged in order to deliver the intended experience, something Moody calls “the socialization of technology.”

Since many of these large-scale activations occur in public or touristy places, privacy, accessibility and language barriers also pose challenges. At the One World Observatory in New York City, Stimulant addressed one of these problems by incorporating data into a visitor’s ticket. When a ticket is scanned upon entry, the visitor is welcomed to the experience in their native language and shown where fellow visitors from around the world have travelled from.

“Digital experiences in physical spaces are made or broken by the context that surrounds them,” Moody said.

This can range from ensuring a user understands or is familiar with the experience’s goal to whether a user’s fashion choices affect an experience. Fake fingernails, for example, do not conduct electricity, which is crucial to a touch-based experience.

Simple Does It

These experiences are turning places into spaces and changing the way we think about what constitutes an experience, all while helping brands and artists alike tell stories that allow users to create meaningful memories. At the end of the day though, no matter how complex the development or installation, these are still user experiences designed to bring people together. Slowly, they are changing the way people interact with each other, technology and the physical world around them.

“Give people a platform to share stories with one another and these emerging behaviors come forward,” Moody said. “Usually it’s the simplest of experiences that allows people to do that.”

How to Support Design Decisions Through Iterative Testing

Creative Cloud

A while ago I was tasked with designing new features for a client’s data-rich web app. The requirements came from business analysts who, together with the product owner, would talk to the client’s product manager and discuss the features that needed to be built.

This workflow is common in service-based companies: lacking access to the end user, the business analysts and product managers become the de facto experts on customer needs. The immediate consequence is that their personal views become the assumed views of the customer, as there is no data to support a different perspective. Decisions are therefore made based on what business analysts think the customer wants, rather than what customers actually need.

This is one outcome of the shift towards an Agile culture that sets the focus on continuously building new features, rather than testing and researching which ones are actually worth the development effort. The opportunity cost of not investing time in research to determine what features are actually needed is much higher than a couple of days’ work to prepare low-fi concepts.

In an Agile environment, the UX designer has to constantly deliver solutions for the current iteration while thinking of the remaining items in the backlog, and preparing the concepts for the next sprint.

The Pitfalls of a Continuous Design Process

The continuous design process is a double-edged sword. On one hand it provides constant challenges to designers and keeps them focused on solving design problems, but on the other it doesn’t allow time for feature validation through proper research and testing.

The need to be on schedule and deliver at the beginning of the sprint forces designers to cut corners and to rely on gut feeling and personal experience instead of valid research data. And after a feature is released, it is assumed that the client will use it, allowing the team to jump into the next item in the backlog.

In this scenario, we could go on about the lack of support for user research, but the problem is created by Agile development practices that promote feature factories. Believing that validation will come after a release is a flawed expectation as it assumes that if you build a feature, customers will use it.

In many cases, even if a client requests a feature, if its functionality slows down a user’s productivity, they will simply ignore it.

Unless designers coordinate across all teams to keep users at the center of their design and development efforts, they will most likely release features that will end up re-worked or dropped.

Unsuccessful features impose an even greater cost than building them in the first place.

The code written to offer that functionality adds to the complexity of an app, and with each feature the cost of carrying this extra “weight” is higher.

With each new feature released, it becomes harder to modify and test an application.

Support Design Decisions Through User Research

To avoid unnecessary complexity, we must support design decisions with both qualitative and quantitative research. All the more so if your team adheres to an Agile design and development practice: you need to adapt your methods to the limited time and resources available for classic user research.

There is a wide range of user research methods available, and the Nielsen Norman Group has done a great job of classifying them across three dimensions:

  1. Attitudinal vs. Behavioral
  2. Qualitative vs. Quantitative
  3. By Context of Use

Qualitative research generates data about user behaviors and attitudes, based on observing them directly. This usually happens in usability and field studies where the researcher directly observes how individuals use a specific product.

Quantitative research collects data indirectly through methods like surveys and analytics. Quantitative research captures large amounts of data that can be analyzed to show the scale at which certain issues appear.
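To make that concrete, here is a minimal sketch (the event names and data are entirely hypothetical) of how raw analytics events might be aggregated to show the scale at which an issue appears:

```python
# Hypothetical analytics events exported as (user_id, event_name) pairs.
events = [
    ("u1", "export_clicked"), ("u1", "export_failed"),
    ("u2", "export_clicked"),
    ("u3", "export_clicked"), ("u3", "export_failed"),
    ("u4", "export_clicked"), ("u4", "export_failed"),
]

# Which users attempted the action, and which of them hit the issue.
attempted = {user for user, event in events if event == "export_clicked"}
failed = {user for user, event in events if event == "export_failed"}

# The ratio shows how widespread the problem is across the user base.
failure_rate = len(failed) / len(attempted)
print(f"{failure_rate:.0%} of users who tried to export hit the issue")
```

Numbers like this only show how many users are affected; a qualitative session is still needed to explain why they failed.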

While quantitative research focuses more on the behavioral dimension, the qualitative approach tries to understand and measure people’s attitudes and mental models when using a product.

Choosing which method to use, and when, is difficult due to the adaptive and evolutionary nature of Agile methods. Depending on the phase of the development process, whether it’s execution or assessment, a lean approach that uses concepts to test assumptions is recommended.

By relying on a build-measure-learn feedback loop, the team can quickly prototype a new feature, measure how clients respond, and learn whether they need to make improvements or completely re-work the concept.
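As an illustration only, the shape of that loop can be sketched in code; the function names and the seeded issue list below are hypothetical stand-ins for real design and testing activities:

```python
# A minimal sketch of a build-measure-learn loop (hypothetical data).

def build(prototype, fixes):
    """Apply proposed fixes to the prototype (here: just record them)."""
    return {**prototype, **{issue: "fixed" for issue in fixes}}

def measure(prototype, known_issues):
    """Stand-in for a test session: return the issues not yet fixed."""
    return [issue for issue in known_issues if prototype.get(issue) != "fixed"]

# Issues that participants would uncover during testing (hypothetical).
known_issues = ["unclear label", "hidden button", "confusing flow"]

prototype, rounds = {}, 0
while True:
    rounds += 1
    found = measure(prototype, known_issues)   # measure
    if not found:                              # learn: concept validated
        break
    prototype = build(prototype, found)        # build the improvements
print(f"Concept validated after {rounds} rounds of testing")
```

In practice each round is a usability session rather than a function call, and issues rarely surface all at once, but the stopping condition is the same: iterate until a round reveals nothing left to fix.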

The RITE Method

One of my preferred methods for testing concepts is the RITE method (Rapid Iterative Testing and Evaluation), an iterative testing method in which changes to the UI are made as soon as the usability test reveals a problem for which a solution can be proposed.

Usually this happens after observing the first participant’s behavior, when the team decides whether improvements to the concept need to be made before the next round of testing.

After that second round, the team continues to run tests and modify the prototype iteratively, until the desired result is achieved and the feature is ready for development.

The disadvantages of using this method in its pure form in an Agile context are:

  • the time needed to adjust the prototype between testing sessions can affect the development schedule.
  • there is no clear deadline for when testing will finish, as that depends on the type of issues found in each round.

Solving User Testing Time and Scope Issues

In an iterative process, low-fidelity design tasks replace the traditional design phase. Because the testing sessions use low-fidelity wireframes, much of the functionality in the concept will be incomplete.

The risk is that during the first test, the client might point out those incomplete parts as issues and disregard the areas that the researcher needs validation on.

The best way to test with low-fidelity wireframes is to keep the focus on specific sections and test them in an isolated context.

By focusing on smaller components, or even just parts of a bigger feature that will be developed in several sprints, the total number of potential issues is minimized.


In just a couple of days of user research, designers can identify the biggest usability problems in their concept. This activity protects the business from investing time and resources in features that distract from and damage its long-term goals.

This adapted RITE method allows for testing and reiterating to be completed in just a couple of days, empowering designers to conduct tests in every sprint. By running tests on small components every two or three weeks, UX designers can align themselves to the development schedule and push design improvements faster.

July Update of Adobe Experience Design CC (Preview)

Creative Cloud

We are excited to announce the July update for Adobe Experience Design CC (Preview). With each release, we’re constantly adding new features and improving the existing ones, based on your feedback. In this July update we’ve focused on bringing you the highly requested Zoom tool, improvements to export and enhancements to defining prototype flows.

To try out these new features, download the latest version of Adobe Experience Design or update it via the Help menu. (Help > Check for Updates)

What’s New in this Update?

Zoom Tool

As a designer, having the ability to zoom in and out of your artwork is critical, especially while working on details. In this update, we’ve introduced a new Zoom Tool located in the left toolbar. You can click and drag to create a rectangular marquee selection to zoom in on a specific part of the artboard.


Marquee select to zoom

If you are working with a trackpad, you can pinch to zoom. If you are working with a mouse, you can use ALT+Mouse Scroll to zoom. You can also access the Zoom tool temporarily while panning (via SPACE) by adding the CMD modifier; this is especially useful for quick navigation, and XD switches back to your active tool once you are done zooming.

Zoom Tool Shortcuts & Tips

  • Zoom Tool: Z
  • Temporary Zoom: SPACE + ⌘
  • Zoom In: ⌘ +
  • Zoom Out: ⌘ –
  • Zoom To Fit: ⌘ 0
  • 100%: ⌘ 1
  • 200%: ⌘ 2
  • Zoom in one step: Single-click
  • Zoom out one step: OPT/ALT + Single-click

Zoom Tool in action

Go To Previous Artboard

Prior to this update, there was no way to create a true “back” button in your prototype that would return a user to the last artboard viewed. With this update, we have introduced an additional property in the target dropdown: “Previous Artboard”.

Using this new option, you can easily create a link back to the previous artboard without having to manually locate and draw a wire to it. Simply head over to Prototype mode and select “Previous Artboard” from the Target dropdown. When returning to the last visited artboard, the transition animation automatically plays in reverse, and its duration matches that of the original animation. This is another great example of how important your feedback is to our team, helping us to define the workflows you need.


Select – Previous Artboard from the Target dropdown.


Using Previous Artboard you can quickly create a back button.

Export PDF

With the July update, we are expanding the ways in which you can export artboards and assets. In addition to PNG and SVG export, XD now supports PDF export.

You can now export all your artboards or selected assets as a single PDF document or as multiple PDF documents. While exporting your artboards, pick PDF from the Format menu in the export dialog. This new addition to the export options should make it easy to share, present and review your designs with stakeholders. We look forward to your feedback so we can further enhance and improve the export options.


Export options now support PDF in addition to SVG and PNG.

Copy & Paste from Sketch and other tools

Bringing in content or copy from other applications is now easier. Use the copy function (Cmd+C) to copy content from apps like Sketch, PowerPoint, InDesign — or any other tool that puts PDF onto the clipboard. A simple Paste command (Cmd+V) places it into your XD file, and the pasted content remains editable in XD.

We’d love feedback on this feature if you find that your content does not import correctly into XD!

What’s coming soon?

Linear Gradients

We know how important gradients are, and the team has been busy working on a cool new gradient editor. We couldn’t resist, so here is a sneak peek of what’s in store.


Sneak peek – Linear Gradients.

Korean Support

A while back, the XD team visited design agencies in Korea and spent time understanding the kinds of challenges and workflows these designers deal with on screen design projects. As a result of that effort, we are thrilled to add support for Korean in the next release.

Be a part of this Journey


Big shout out to everyone who downloaded XD and sent us feedback. Your continued feedback is invaluable to the team as we constantly strive to build a better product that you, the designer, will love. Visit our User Voice feedback site to suggest features, upvote ideas or report issues, and join our Forum to participate in discussions with the community.


You can follow our handle @AdobeXD for updates or reach the team on Twitter using the hashtag #AdobeXD. You can also talk to us on Facebook, where we answer questions during live sessions and share videos and other updates.


While sharing your prototypes on Behance, don’t forget to tag them with #MadeWithAdobeXD and select Adobe Experience Design under “Tools Used” for the opportunity to be featured in the Adobe XD Newsletter. Check out Sabine Schwarzkopf’s Tralley app project on Behance, designed using Adobe XD.

Events & Articles

The Adobe XD team visited UXcamp Europe #UXCE16 in Berlin on June 25/26, where attendees tested Adobe Experience Design CC. See what they had to say about the workshops:

Video – Adobe XD at UXcamp Europe 2016: Parth Nilawar

Video – Adobe XD at UXcamp Europe 2016: Joachim Tillessen

Reading – What if there were a formula to create innovative products, experiences or services? One of our product managers, Demian Borba, shares a manual for innovation.

Please download the latest version of Adobe Experience Design and let us know what you think. Looking forward to hearing your thoughts.

Carolina Niño’s Technicolor Dreams

Creative Cloud

Carolina Niño is a Colombian-born artist who creates vivid, geometric collages inspired by natural elements. She recently created a Photoshop Tutorial on how to create a fantastical collage inspired by images from Adobe Stock. We spoke with Carolina about her journey to finding her signature style and the importance of creative space.

Carolina has always been surrounded by art and nature. She comes from a family of artists, and growing up in the Colombian countryside, nature became Carolina’s first and lasting source of inspiration. Even as a city-dwelling adult today, she frequently travels to reconnect with the elements.

After completing her education in graphic design, Carolina began her career at an agency but felt stifled by the controlled environment. Craving a change, she moved to Spain, where she lived with like-minded artists and musicians. In this new, creative atmosphere, Carolina’s artistic sensibilities began to blossom. It was a time fueled by curiosity, a period of learning. “At that time I was just absorbing, I didn’t have any style at all,” she admits.


The concept of a creative atmosphere also resonates in Carolina’s art. In the same way that people need space to grow and thrive, so do the elements of a collage. As Carolina explains, it’s impossible to focus on a single aspect of a crowded visual, and so she places utmost importance on the perfect balance of colors, shapes, and space. In a way, “the space between one object and another is more important than the object itself,” she says.


After her transformative years in Europe, she returned to South America with a renewed creative perspective. She picked up her first computer and started experimenting with Photoshop. Mixing and layering images came naturally to her, and she began to master the perfect balance between colors, shapes, and space.

Around the same time, she discovered Behance. “I wanted to be like those artists on Behance,” she shares, but she found the task of catching up to these established artists to be a daunting challenge. Rather than trying to emulate their success, Carolina shifted her focus to her craft, learning from tutorials and bringing together her experiences to cultivate her signature style.


And her strategy worked – Carolina’s work has been featured in both the Illustration and Graphic Design galleries on Behance, and her projects have drawn combined views of over 72,000.

Carolina’s art is an extension of her experiences in life. Each piece comes together to create a complete, balanced image. “Life is like a big collage. You travel and you put pieces together to create something.”

Follow Carolina’s step-by-step tutorial to create your own digital collage with images from Adobe Stock.

Check out more of Carolina’s fantastical work on her Behance portfolio.