Working with Immersive VR Video

Immersive video has been around in different forms for decades, but it hasn’t been able to break through on a large scale until recently.
The most common VR video solution places the viewer inside of a sphere, and wraps a video stream around the sphere. This solution can be even more immersive when using stereoscopic video, allowing the viewer to see a unique stream of wrapped video for each eye. This is the sort of viewing experience provided by YouTube, Google Cardboard and Facebook.
The key to this solution is capturing, formatting and encoding in such a way that it can be easily wrapped around a sphere. The easiest way to accomplish this is through Equirectangular Projection.
What is Equirectangular Projection?
If you think about the surface of a sphere, a single point is defined as having a latitude and a longitude coordinate. Now think about a standard video frame. A video frame has a width and height, with points defined as X and Y coordinates.
Equirectangular Projection simply unwraps the sphere, mapping the longitude to the X coordinate, and the latitude to the Y.
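As a concrete sketch of that mapping (the frame dimensions and function are illustrative, not anything Premiere Pro exposes), the projection is just two linear conversions:

```python
def equirect_to_pixel(lon_deg, lat_deg, width, height):
    """Map a point on the sphere to an equirectangular frame.

    lon_deg: longitude in degrees, -180 (left edge) to +180 (right edge)
    lat_deg: latitude in degrees, +90 (top pole) to -90 (bottom pole)
    """
    x = (lon_deg + 180.0) / 360.0 * width   # longitude -> X, linear
    y = (90.0 - lat_deg) / 180.0 * height   # latitude  -> Y, linear
    return x, y

# The point straight ahead (longitude 0, equator) lands dead center
# of a 4096x2048 frame:
print(equirect_to_pixel(0, 0, 4096, 2048))  # (2048.0, 1024.0)
```

Every pixel column corresponds to a fixed slice of longitude, and every row to a fixed slice of latitude, which is exactly what lets a player wrap the frame back around a sphere.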
Here is an example we are all familiar with, the globe laid flat:
Example of Equirectangular Projection
Notice that the image widens as it progresses towards the poles. This is because the spatial distance between longitude positions decreases the closer you get to a pole. The smaller circumference of a line of latitude close to a pole is stretched to the same width as the equator.
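The amount of stretching follows directly from the geometry: a line of latitude φ has a circumference proportional to cos(φ), yet it is mapped to the full frame width. A quick illustration of the resulting stretch factor (this is just the math behind the distortion, not something Premiere Pro computes for you):

```python
import math

def horizontal_stretch(lat_deg):
    """How much wider a feature appears at a given latitude,
    relative to its true size at the equator."""
    return 1.0 / math.cos(math.radians(lat_deg))

# The stretch grows slowly at first, then explodes near the poles:
for lat in (0, 30, 60, 80):
    print(f"{lat:2d} deg: {horizontal_stretch(lat):.2f}x")
```

At 60 degrees of latitude everything is already drawn twice as wide as it really is, which is why Greenland looks so enormous on the flattened globe above.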
This distortion makes it difficult to comprehend what an Equirectangular image is composed of. In addition, you’re seeing far more information than you’re typically accustomed to, and its position is not intuitively defined. The left and right edges of the frame actually meet behind the camera, so an object at the far right of the frame is behind you, directly adjacent to an object at the far left.
Here’s a real-world example of a video frame with Equirectangular Projection:

The Golden Gate Bridge catches our eye because it’s towards the center of the frame, but when we shoot immersive video, we want to be aware of what’s in the entire scene. Looking at the frame, it might not be apparent that there’s a fence behind us. Or the brave soul leaping from that fence behind our left shoulder!

To tell a good story, editorially, you need to be aware of what is going on in the entire frame. This is even more of a challenge with Equirectangular Projection. Adobe Premiere Pro CC 2015.3 provides you with the tools necessary to look around the sphere, simulating the view you might have through a headset or watching on a desktop viewer such as YouTube.
Editing Equirectangular Video Is Not as Hard as You Might Think
Even though working with Equirectangular Projection can be a challenge, it’s not that different from any other video format. Most of your editing skills, tools and tricks are relatable. Trimming, splicing and track manipulation all work the same. Even many of the most commonly used effects work well with Equirectangular Projection.
One feature new in the Premiere Pro 2015.3 release can be especially helpful with immersive video: Proxy workflows. Some Equirectangular video can be as large as 8,000 pixels wide by 8,000 pixels high – much larger than most other frame sizes. Video of such high resolution is very difficult to decode in real time, even with hardware acceleration. Premiere Pro’s new Proxy workflow allows you to ingest this material and begin editing with it while lower-resolution 2K versions are generated in the background, ready to take their place on the timeline with the click of a button.
Enabling the VR Video Display
By enabling the VR Video Display in either the Program or Source monitor, Premiere Pro allows you to step inside the sphere and view the video from a user’s perspective. Using VR Video Display, you can simulate different viewing experiences with your Equirectangular Video, for example, using a VR headset such as the Oculus Rift or on a desktop through YouTube or Facebook.
To activate the VR Video Display, simply click on the settings wrench icon in the mid-lower right of the monitor, open up the “VR Video” sub-menu and select “Enable”.

There is also a new monitor button that quickly toggles the VR Video Display, which you can easily add to your monitor’s playback controls. To add this button, open the Button Editor by clicking the “+” icon in the far lower right of the monitor, then drag the button titled “Toggle VR Video Display” to the desired position under the monitor display.

You can also assign a key of your choice to toggle the VR Video Display from either or both of your monitors using the Keyboard Shortcuts dialog.
Viewing Immersive Video in the VR Video Display
Once you enable the VR Video Display, your view is from inside the sphere and your monitor will transform into an experience like this:

The upper center is your interactive window inside of the sphere. You can immediately change your perspective by simply clicking and dragging on your view inside of the sphere, allowing you to easily pan and tilt. At any time, you can quickly re-center your perspective by double clicking anywhere within the view.
The sliders along the right and bottom edges allow you to control the tilt and pan, respectively.
The numeric fields alongside each slider provide you with exact feedback of your viewing position, but also allow you to enter your own values. Positive values will pan to the right and tilt upward, while negative numbers will move your perspective to the left and downward.
At the very bottom of the VR Video Display is a radial knob which provides more visual feedback as to which direction you’re currently facing; it also allows you to click and drag within it to pan your perspective left and right. The radial knob is unique in that it allows you to spin completely around and keep going past your starting point.
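Under the hood, that wrap-around behavior is simple modular arithmetic. A small sketch (the -180 to +180 range mirrors the positive/negative pan values described above; the helper itself is illustrative, not part of Premiere Pro):

```python
def normalize_pan(angle_deg):
    """Wrap any accumulated pan value back into the -180..+180 range,
    matching how a full spin returns you to your starting orientation."""
    return (angle_deg + 180.0) % 360.0 - 180.0

# Spinning 450 degrees to the right leaves you facing 90 degrees right:
print(normalize_pan(450))   # 90.0
# Panning 190 degrees left wraps around to 170 degrees right:
print(normalize_pan(-190))  # 170.0
```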
Configuring the VR Video Display
The VR Video Display works with both monoscopic and stereoscopic video. It’s also possible to work with Equirectangular footage that covers only a smaller section of a sphere. The display can be configured to simulate the view from a headset or desktop, even allowing you to view stereoscopic footage in 3D using a pair of inexpensive red/cyan anaglyph glasses.
To configure the VR Video Display, simply click on the settings wrench icon in the mid-lower right of the monitor, open up the “VR Video” sub-menu and select “Settings…”. This will present the following dialog window:

“Frame Layout” lets you declare if your video is “Monoscopic” or “Stereoscopic”, the latter also allowing you to choose between an “Over/Under” or “Side-by-Side” layout. If you choose one of the stereoscopic layouts, you’re given a choice of a “Stereoscopic View”, either the “Left” or “Right” eye, along with a red/cyan anaglyph composite.
When working with immersive video, you need to think in degrees of view, in addition to pixel dimensions. The “Captured View” describes what part of the sphere is represented by the video frame. Typically, you’ll leave this at the full-sphere defaults of 360 horizontal by 180 vertical degrees.
The “Monitor View” fields allow you to control what portion of the sphere you view while in the VR Video Display mode. This is where you can simulate different viewing experiences. For example, a value of 90 horizontal by 60 vertical degrees will approximate an Oculus Rift headset, while 160 by 90 degrees will simulate viewing within YouTube. Note that the aspect ratio of the view window is determined by these settings as well. For example, 160 by 90 degrees will present a 16:9 view window.
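The aspect-ratio behavior is straightforward arithmetic: the view window’s shape is just the ratio of the horizontal to vertical degrees. A quick check (the helper is illustrative; the 160×90 and 90×60 values are the presets discussed above):

```python
from math import gcd

def view_aspect(h_deg, v_deg):
    """Reduce a Monitor View setting to its simplest aspect ratio."""
    d = gcd(h_deg, v_deg)
    return h_deg // d, v_deg // d

print(view_aspect(160, 90))  # (16, 9) -> a 16:9 view window
print(view_aspect(90, 60))   # (3, 2)  -> a 3:2 view window
```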
Working with Effects
Three of the most common effects used in Equirectangular Projection are Dissolves, Color (Lumetri) and Speed. These effects do not alter a pixel’s position enough to cause an issue. The basic rule is that if an effect changes a pixel’s vertical position, it’s unlikely to work. You have to consider how a pixel’s vertical position affects its horizontal distortion once perspective is applied. For example, Horizontal Squeeze or Wipe will work, while a simple Picture-in-Picture effect will look distorted, because it scales both the vertical and horizontal dimensions of the image.
The other thing to consider is compositing. You can composite Equirectangular imagery on top of other Equirectangular imagery as long as you do not change its vertical position or scale. It’s possible to shoot an object against a green screen in Equirectangular and chroma key it on top of a background image. You cannot composite traditional flat video on top of Equirectangular without additional steps, and that brings us to the fourth most common effect: Titles and Graphics. In order to work with flat imagery, such as a title, on top of Equirectangular, it has to be projected into the sphere. To composite flat video on top of Equirectangular, we recommend using third-party plug-ins, such as Mettle’s SkyBox 360/VR Tools – specifically the Mettle SkyBox Project 2D effect.
Finally, there is one important built-in effect to be aware of when working with Equirectangular video, and that’s the Offset effect (found within Video Effects/Distort). Unlike shooting with a traditional camera, a VR rig has no single lens to use to focus on the subject. The entirety of the scene is captured at once. When you begin to cut together different shots, you may find that your camera changed orientation, and that the subject you care most about is in a different position than in the previous shot. You don’t want the user to have to twist their head to bring the subject back into view.
The Offset effect allows you to rotate the entire frame. Apply the Offset effect onto the cut segment in the timeline and drag the first number of the “Shift Center To” parameter pair left or right, depending on how you want to pan the clip. Take care not to shift the second parameter, which would violate the vertical position rule. One final note, this effect is not real-time and will require rendering. Mettle’s Rotate Sphere effect provides real-time support. It also allows you to tilt vertically and even roll. Care should still be taken when tilting and rolling the sphere, as it can degrade the image quality, purely due to the way Equirectangular projection works.
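Because the frame width spans the full 360 degrees of longitude, you can convert a desired pan into the horizontal pixel value to enter for “Shift Center To”. A small helper sketch (the proportional relationship is inherent to the projection; the function and frame size are illustrative, not part of Premiere Pro):

```python
def offset_shift_for_pan(pan_deg, frame_width):
    """Horizontal pixel shift that rotates the sphere by pan_deg.

    The full frame width equals 360 degrees of longitude, so the
    conversion is a simple proportion. Negative values pan the
    other direction.
    """
    return pan_deg / 360.0 * frame_width

# Re-centering a subject 45 degrees away in a 4096-pixel-wide frame:
print(offset_shift_for_pan(45, 4096))  # 512.0
```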
Exporting Equirectangular Video
Exporting Equirectangular video is much like exporting any other type of video, with a few caveats:

First and foremost, regardless of the Format and Preset you select, be certain to change the “Source Scaling” option in the upper left of the Export Settings dialog window to “Stretch To Fill”. If you do not do this, you are likely to introduce either letterboxing or pillarboxing to the exported video, which breaks the seam of the sphere and distorts it.
If you export H.264, HEVC or QuickTime, you should enable the “Video is VR” checkbox in the Video settings tab. Next, choose the appropriate “Frame Layout” that matches your source video, just as you did in the VR Video Display settings. With this setting enabled, Equirectangular video published to YouTube and Facebook will automatically play in a special VR viewer after it has been processed by the service, which can take up to an hour after upload is complete.
For very large Equirectangular frames, care must be taken when choosing a container and codec. If exporting on the Mac, you have the option of exporting using QuickTime and one of the ProRes variants. The situation is more complicated on Windows, where H.264 is your best choice. When exporting large frames with H.264, follow these steps:

Choose the “H.264” format with the “Match Source” preset
Disable the match source check boxes for dimensions, profile and level
Change the profile to “High” and the level to “5.2”
Once again, choose “Stretch To Fill” in the “Source Scaling” setting at the far upper left
Disable the aspect ratio link next to the dimensions and change the dimensions to 4096 by 2304 (or 2160 for 60fps)
Select a high bitrate such as 100 Mbps.
Enable “Use Maximum Render Quality” to achieve the best scaling results.

The following is an example of H.264 stereoscopic export settings:

In Summary
Adobe Premiere Pro has long been the standard tool used by editors working with immersive VR video. With the 2015.3 version, Premiere Pro provides you with the tools to better view, export and share VR video. In addition, the market is quickly growing for everything related to VR video: rigs, cameras, Equirectangular video stitchers, effects, VR headsets and even in-application headset monitoring. Immersive VR video is nascent and exploding. It’s different from stereoscopic 3D in that the technology isn’t being driven by a small handful of large electronics manufacturers. It’s being nurtured by hundreds of companies, large and small, many working together, and Adobe will continue to contribute.

Adobe’s NAB 2016 Preview of VR Workflows:
Wikipedia article on Equirectangular Projection:
Third-party plugins:

Mettle – VR monitoring, effects:
Kolor – VR monitoring, stitching:
Dashwood – VR monitoring, effects:


Their Planet, Their Voice: Creativity Scholars and the Environment

2016 Adobe Creativity Scholars are boldly creating representations of the world we live in. This month, we kicked off a 7-part series introducing them through the themes that bring them together. The scholars we profiled last week expressed their views about human rights. Today, we introduce a group of artists who transcend adversities affecting the environment and illustrate its beauty. These creativity scholars use photography and moving images to voice their views about our planet. They are ecologists, naturalists and intimate observers amplifying the relationships between nature, people and inanimate objects.
“The world around us is ever-changing. Those who let it pass, are destined to regret it. But those who pause at the moment, and capture what can’t be seen without stopping, understand the value of time.” – Adam Peche

Agata Mroczek, 19 (Niecme, Lubelskie, Poland)            
Agata’s photographic collection, My Vision of Art, exhibits how time, people, and inventions impact nature. She asks us to live and interact with the environment in a more harmonious way: “I wish people started observing thoroughly the world, and were more sensible and sensitive to the surrounding beauty I capture in my photos.”

Adam Peche, 18 (San Antonio, Texas, USA)   
Adam reminds us of the importance of each moment. His photographs, Wandering the World, pause to recognize a specific moment in time, showcasing sensory experiences of our environment: the warmth of the sunlight, the sound of a violin, the beauty of a classic car. “The world around us is ever-changing. Those who let it pass, are destined to regret it. But those who pause at the moment, and capture what can’t be seen without stopping, understand the value of time.”  

Camila Aguirre Avila, 19 (Huixquilucan, Mexico) 
“I am an ecologist and creative millennial. I want to remind people that we need to respect nature in order to keep living.” In Camila’s creation, Dancing with the Moon, she uses the surreal image of a woman merging with the moon, the sea and the sky to illustrate our relationship with the environment and inspire us to connect with the natural world.

Dilip V, 17 (Bangalore, Karnataka, India)        
Dilip and his teammates researched and edited the documentary Garb-age. The film exposes the terrible conditions a poor community faces when a lack of garbage collection leads to health problems, mosquito infestation and an unbearable stench. A woman laments her family’s hospitalization due to the rotting field of waste outside her doorstep. Local youth organize to remove rubbish, create awareness, and provide trash, recycling and composting bins. Dilip makes his objective clear: “My goal is to make a difference!”
Lead Photo by Salwa Majoka

How To: Using Lumetri Color’s HSL Secondary controls in Premiere Pro CC

In Premiere Pro CC 2016.3, we added a new HSL Secondary section to the Lumetri Color panel. This provides additional precision color tools to isolate a color/luma key and apply a secondary color correction to it.
HSL Secondary is commonly used after the primary color correction, when you need to isolate and adjust a specific color more precisely, especially when the overall Hue Saturation Curves are hitting their limits. Typical scenarios are enhancing a specific color by making it stand out from the background or keying a specific luminance range, like a sky.
The organization of controls within the HSL section guides you through the workflow. First, set a key, then refine your key and apply a color correction at the end.
Let’s walk through the workflow step by step.
Access HSL Secondary tools
First, switch to the Color workspace by selecting it from the workspace bar at the top of the application window. Note that when you do this, the clip currently under the playhead is automatically selected so you can apply a color correction effect to it.
Open the HSL Secondary section within the Lumetri Color panel by clicking once on its header to expose the individual controls.
Pick the target color
In this particular shot, we want to isolate the pink jacket and add a little more punch to it, without affecting the rest of the image.
To pick a target color, use the “Set Color” eyedropper to isolate the pink jacket. Click on the eyedropper tool, then click again on a color in the monitor’s video image. Alternatively, instead of picking a color from the image, you can click on one of the color swatches (round colored dots) to select a preset color as a starting point.
Once you’ve isolated the color of the jacket, you’ll see that the Hue, Saturation and Lightness ranges (located just below the color swatches) have updated to reflect your color choice.
Tip: while picking a color with the eyedropper, you can hold down the Ctrl/Cmd modifier key to switch to a larger sample size.
Now you should have a good starting point.

Show, tweak and refine mask
Next, let’s tweak and refine the key. Adjust the key to your liking by manipulating the H/S/L sliders. While you manipulate the ranges, the key is toggled on automatically so you can better see the affected range. If you want the key to be persistent, click on the check box next to the key preview popup menu. By default, the key preview is set to “Color/Gray”, giving you a better view of your key, since you can still recognize the image. You can also switch to a common “Black/White” mode via the dropdown menu.
To move the entire range, click in the central area of the desired H/S/L slider and move it left/right while holding down the mouse. Use the triangles at the top of the slider to expand/restrict the range and the triangles at the bottom to feather the selected range.
You can also deselect Hue, Saturation, Lightness ranges entirely. When deselected, the entire range is included in the key. For example, deselecting the H and S ranges allows you to quickly adjust the luma range to apply a lightness key.
To reset the ranges, click the reset button below the sliders or double-click on the range slider.

Continue to tweak the ranges until the mask covers the entire desired region, like the jacket in this example. Then add some Denoise and Blur to smooth out the mask.
Apply Color Correction
Once you have a well-defined key, use the grading tools in the Correction section to apply an isolated color correction to your key.
By default, a mid-tone color wheel is displayed, but you can switch to a traditional 3-way color wheel by clicking on the icon just above the wheel.

Sliders for Temperature, Tint, Contrast, Sharpen and Saturation are also available below the color wheel to precisely control the applied correction.
For this example, we’ll use the Saturation slider to give the jacket more punch and use the Sharpen slider to make it stand out more.
The HSL section of the Lumetri panel combines with the existing tools to give you even finer control of your shots, allowing you to achieve the perfect look in just a few clicks.

Seen This Week: Adobe talks Kickbox at USA Today’s American Innovation event

Mark Randall talks about how Adobe is helping others innovate during One Nation: American Innovation. Photo credit: Jeff Meesey
A sold-out crowd of over 600 people recently attended USA Today’s One Nation event at Cape Canaveral in Brevard County, Florida to celebrate American innovation, what drives it and new technology. The event was part of an ongoing series that aims to create and foster conversations on topics taking the spotlight in this year’s election.
Adobe’s very own VP of Innovation, Mark Randall, took center stage to share insights and discoveries about the importance of empowering employees to take risks. Mark told the audience, “The opposite of creativity is fear – the fear of failure.” He went on to discuss Adobe’s grassroots Kickbox program that’s become an industry model for igniting innovation, where employees, regardless of their role in the company, are given the opportunity to innovate.
Other presenters at the event included Ed Hoffman, NASA chief knowledge officer; Dwayne McCay, president and CEO of Florida Institute of Technology; Christine Herron, VC expert and director at Intel Capital; Ray Soto, USA Today Network virtual reality pioneer; and Bill Gattle, president of space and intelligence solutions at Harris Corporation.
Watch the full event recording and Mark’s talk here.

Make a Masterpiece: Recreating a Masterpiece with Adobe Stock and Photoshop

This article is a translation of HOW TO (RE)MAKE A MASTERPIECE (written by Terri Stone), posted on June 21, 2016.
Ankur Patar is a digital artist based in India and Australia who works as an illustrator, graphic designer and photographer. While his background is wide-ranging, the challenge Adobe set for him was more difficult than anything before: to recreate a stolen Rembrandt masterpiece in Adobe Photoshop CC using only assets from Adobe Stock.

I used at least three Adobe Stock images to recreate the torn rope sagging in an S shape, but I couldn’t get it right. So I used Photoshop’s Puppet Warp to bend the images into the proper shape. I had never used Puppet Warp before, but it was perfect for this task.



I started by working on the boat planks indicated by the arrow, combining a variety of assets. I found ten wood-grain texture images on Adobe Stock that resembled the planks in the original painting, pasted those textures onto shapes I had created in Photoshop, and used the transform tools to add curves and sag so they looked like the planks at the top of the original. I used nearly every Transform option: Scale, Rotate, Skew, Distort and Warp. I don’t believe I used Perspective, but I used everything else. Then, to adjust the light and shade, I also used Curves, Levels and Color Balance adjustments.

Patar is not the only artist who took on our challenge of recreating lost masterpieces. To learn more about this piece and the other three works, visit the Make a Masterpiece website.
Original article by Terri Stone

#AdobeFrance10K | David Magère believes the user interface designer’s craft is here to stay

Our Twitter account @AdobeFrance has passed the 10,000-follower mark! It’s a moment that means a lot to us and we want to say thank you! To pay tribute to our community, we invite you to discover a creative’s portrait every day. This project, which we’ve named #AdobeFrance10K, consists of 3 questions put to 15 creatives who have followed our adventures since our very beginnings on Twitter!
Today, we’d like you to meet David Magère, a UX/UI designer who believes the user interface design profession is here to stay!
1/ Why do you follow @AdobeFrance?
In the digital world, you have to stay on top of every technological innovation and every new graphic trend. Adobe is the world reference when it comes to multimedia tools. The godfathers of creative work, basically. And if you don’t follow the spiritual father of creatives, then you haven’t understood a thing. It’s common sense, and we designers have plenty of that!
2/ What role do Adobe technologies play in your creative life? What has struck you most in their evolution?
On a personal level, I’ve long used Photoshop for everything image-related, and Premiere Pro to make videos of my travels. But it’s professionally that Adobe technologies are most useful to me, throughout my entire creative process. That may sound a bit cliché put like that, but it’s the reality. Take the eighty-eight project, for example. Created in collaboration with Alexis Cetois (developer): I used Adobe Photoshop to design the website’s interfaces, Illustrator for the visual identity (logotype, iconography…), InDesign for the print documents, and After Effects to prototype the site’s interactivity. Adobe’s strength is knowing how to reinvent itself and anticipate tomorrow’s needs. Project Comet, for example, is a godsend for interface designers at a time when the job has ultra-specific requirements!
3/ What changes do you foresee for your profession in the years to come?
One way or another, users will not be able to do without human-machine interfaces. That said, as paradoxical as it may sound, the trend points toward a rapid evolution to interfaces… without interfaces! More “Hey Siri!” and “OK Google” and people gesturing in their living rooms, and fewer cries of “it doesn’t work when I click!”. So despite what one might think, the user interface design profession will very probably not die out, but the media it lives on are likely to evolve very quickly. Screens displayed on any surface with total fluidity? A Candy Crush on toilet paper to pass the time, say! Possibilities worthy of science fiction films!

As lead designer on the Eighty-Eight project, David Magère’s mission was to make the codes of astronomy accessible through a learning tool that is pleasant to use, while preserving the relevance and accuracy of the data.

Find David on social media and elsewhere: Twitter | Site

The Importance of Model Releases

In a world full of images, intellectual property is a topic of utmost importance that must be kept in mind by professional and amateur photographers alike. John Bobeldyk, Legal Specialist at Adobe specializing in technology, user-generated content, and intellectual property rights, explains the importance of model releases in the context of stock photography and for what circumstances they’re required.
Thanks to technological advances, there is now an unprecedented number of opportunities to create photos, videos and illustrations of people. This abundance of content heightens the importance of laws and regulations that protect the rights of subjects who are being photographed. Everyone has rights to privacy and publicity, which allow them to control the use, reproduction and publication of their image and likeness.
There are two important rights you should be mindful of when you are photographing people:
1. Right to Privacy
The right to privacy gives everyone the right to be left alone, which means they have the opportunity to oppose the public circulation in any medium (posters, blogs, social media etc.) of any image or video in which they appear and are identifiable. The circulation of such content without permission could be interpreted as an attack on the individual’s private life and require the removal of the image, a compensation payment in line with the damage incurred, or even criminal sanction.
2. Right of Publicity
The right of publicity relates to every individual’s right to control the use of their image and likeness for commercial purposes or on social media. Any third party wishing to use photos or videos of another person for commercial use must first obtain permission from the subject in the form of a model release.
If you are photographing people and wish to submit and sell these images in a marketplace like Adobe Stock, you will need to get a model release from the model granting you such permissions.
What is a model release? 
A model release is a formal, dated and signed document that determines the conditions for granting an individual’s image for commercial use. We advise that photographers use an industry standard model release form to capture all necessary rights.
The model release protects the photographer by showing that the model gave permission to shoot video or photographs of them for fair compensation, and it establishes the photographer’s right to commercially license the content. It also protects the model by documenting an agreed-upon and well-defined photography session. Without a full model release, the use of an individual’s image can be penalized, in which case the model can demand the removal of all media, seek financial compensation for damages, and sometimes press criminal charges against the third party.
Photos and video taken in public places can be used without a model release if the people depicted are not identifiable or recognizable (long-distance shots, subjects with their sides or backs to the camera, etc.). However, a model release is required if the image is retouched to target and isolate one person, a face, or even two passers-by, in order to make them identifiable and the main subject.
This article is for informational purposes only and is not legal advice or a substitute for counsel. Also, it may or may not reflect the most current legal developments in privacy around the world; accordingly, this is not promised or guaranteed to be correct or complete.

Celebrating 2 Million Videos on Adobe Stock

Offering content through Adobe Stock has been an exciting addition to help our Creative Cloud customers change the world through digital experiences. The demand for content has never been greater. Not only is more media being created, but it is being created in less time and for more delivery types. As consumers, we want to view content at any time and on whatever means or device happens to be convenient. This means that the days of simply delivering on three media (TV, print and web) are a thing of the past.

Adobe Stock offers a way for creatives to keep pace with the amount of media that needs to be created. When you need that specific shot to tell your story, many times it is easier and more cost effective to license it rather than go and shoot it yourself. For example, as a person based near New York City, I can easily travel to get a shot of Times Square, but it is actually more cost effective for me to license it from Adobe Stock than to get that shot myself.

Other shots like the ones below are simply too difficult to get unless you are specifically shooting for a project. Adobe Stock can help you find that precise footage you’re looking for.

I’m pleased to announce that Adobe Stock recently passed an important milestone – adding our two millionth piece of video content to the library. It’s a big achievement and incredible given that we reached the one million mark not too long ago.

Considering today’s demand for content, there has never been a better time to start contributing to Adobe Stock. We are constantly expanding our collection with new and compelling videos, and Adobe Stock’s integration with Creative Cloud software means you have the opportunity to market your work to millions of creatives around the world. If you are interested in becoming a contributor, learn more here.

To help existing and new contributors, we’ll be posting a list of wanted shots in the near future. Stay tuned!

#134 The New “Akiko’s Room,” Episode 1: RANA UNITED

A new segment in which Akiko Saito, a familiar face on Adobe Creative Station with a wide circle of friends across the creative industry, plays host and invites her creative colleagues in for a chat. The guests for this memorable first episode are creators from RANA UNITED, a pioneer of interactive advertising in Japan: the team from RaNa extractive, who also took part as Jammers in Creative Jam Tokyo vol. 2, and RANA 007! How were those famous ads actually made!? A must-see!
Guests: the team from RANA UNITED

5 Key Steps to Modernizing Your Human Capital Management

In today’s unsettling and hypercompetitive economic climate, success depends upon the people you hire and the people you keep. To help their organizations improve at this, HR executives are looking to digital platforms to more effectively attract and retain needed talent, as well as to manage services and requirements in a more streamlined way.
The goal is to bring about a corporate culture that emphasizes talent management that adapts to changing markets and opportunities for the business. Today’s constellation of digital HR solutions—enabled by technologies such as cloud, e-signatures and data analytics—provides a more complete picture of the employment lifecycle and helps bring together formerly siloed knowledge and work practices of enterprises.
It’s important to note that digital technologies by themselves don’t magically transform HR processes. Modern HR, built on a foundation of digital platforms, requires new thinking about the role of human capital management in competing in today’s global economy.
The following are ways HR executives can help lead the digital revolution and assume greater leadership roles within their organizations.

Rethink HR processes and workflows: Digital HR streamlines processes, often reducing the time to complete projects from days to minutes. For example, performance reviews, which involve distributing printed checklists to involved parties, can be streamlined and consolidated online. As a result, HR executives will see their roles elevated to that of enterprise leader and advisor, especially at a time when organizations are seeing severe imbalances between skill shortages and changing priorities.
Focus on interactivity, and maintain the human touch: Nobody wants to have to answer to a soulless machine. As with any pursuit, interacting with technology should bring people closer together, not isolate them. A highly interactive and informal social media presence—perhaps involving a high-level executive—may go a long way toward opening communication channels. Technology interfaces and solutions should help people not only to do their jobs better but to enjoy their jobs more.
Think in terms of data: HR executives and their business colleagues need to adopt analytical thinking in their decision making. Ultimately, the ability to tie results to the business is the most powerful role for HR. Successful staffing, compensation and performance decisions and strategies are increasingly driven by metrics, available through today’s digital HR offerings.
Look at cloud options: The cloud offers many capabilities to businesses, including human capital management. Modern HR is highly agile, and capabilities can be acquired from a company’s own data centers or through cloud-based services, such as Software as a Service. 
Raise awareness and provide training: Modern HR brings the entire enterprise into many of the processes once hidden away within HR departments—talent management, recruiting, hiring and more. It’s important to communicate how modern HR will help make managers’ and employees’ jobs easier, as well as improve relationships across the board, including with the job applicant community. It may be a relatively straightforward process to adopt cloud-based HR solutions that replace legacy systems, but transitioning HR staff may require different workflows and processes. Reevaluate training approaches, including shifting the HR team’s emphasis towards an integrated view of the entire environment—not just slices of specific tasks.

Digital adoption is changing the way organizations do business, both on the outside and inside. As the shift to digital accelerates, HR executives have the opportunity to significantly increase the value of their roles within their enterprises.

Register for the webinar on 8/4 to learn how to turn top talent into a strategic business force. Listen as I sit down with experts from Workday and Driscoll’s to discuss the impact of HR tech.

Start your free trial of Adobe Sign today!
Subscribe to keep up with all things Document Cloud.