
Product Design




November 20, 2017 at 6:27pm (Edited 5 years ago)
In this post, I’m going to lay out a case for an intermediary format between design and engineering tools to enable more efficient, capable tooling for product teams generally and designers especially. This proposal is based on a series of conversations I’ve had with small groups of designers and engineers over the past year or so. 
If we can provide appropriate context, our computers can do the work of translation between design and development processes for us. This should remove a significant amount of work and confusion between teams and enable individuals and companies to focus on harder, more important problems.

The purpose of UI design

A software company’s purpose is to solve some problem for a user. An effective solution can be enabling new behaviors, making existing processes easier, reducing cost, or simply giving people something they enjoy. For any of those things to happen, a product needs to ship to an end user.
This maps directly to the purpose of any interface designer’s work - which is to compose an interaction layer for software that enables users to access solutions to a given problem set. If a user is unable to do so, the designer of that software is not done yet. Therefore, a designer cannot be done until the product has shipped.
As a note, I’m distinctly separating “interface design” and “illustration”. Illustrations often appear in interfaces or alongside products, but the processes and tools necessary for successful instances of each are enormously different. If you call yourself a UX designer, product designer, interaction designer, or something else where your output is meant to lead directly to the production of user-facing software, I’m putting your work in the bucket labeled “interface design”. If the final product of your work is an image, icon, illustration, typeface, advertisement, print media, or some other visual communication medium without native interactivity, I’m putting that work in the “illustration” category.

The problem with UI design tools

Every popular design tool available today is optimized for illustration. The tool’s marketing site and documentation might talk about the intersection of design and engineering, provide tutorials on how to compose design systems, and the artboards might come in common screen sizes, but they are ultimately optimized for drawing pictures. For proof of this, look no further than the toolbar of your current design tool:
Pen tools were designed for illustration. Shape layers were designed for illustration.
A common argument for the current model of design tools is that they allow you to directly manipulate layers to create quickly and accurately. For UI design tasks especially, this premise breaks down. These tools are an imperative abstraction over a set of properties determined by primitives - rectangles, ellipses, vectors - that are inappropriate for the task required of them. Because of these primitives, the material output of such tools will always be, at best, an image. This is great for doing icon work or spot illustrations in your UI, but it adds significant friction to the process of building the interface itself.
A quick example: some design tools attempt to provide a basic description of your UI in code. The problem is that your tool thinks the button you designed is a rectangle with rounded corners, not a button, so any code it provides is only marginally useful - if at all.
The design tool’s code output is antithetical to how components are built in production. As a frontend engineer, there is no case in which I would paste this into my codebase. Some of the values are useful, but the property declarations don’t have enough context about the intent of the design to provide significant benefit to the programmers they’re built to serve.
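To make the gap concrete, here is a hypothetical sketch (all names - `toolOutput`, `ButtonProps`, `describeButton` - are illustrative, not output from any real tool) contrasting what a shape-based tool can know about a button with what a production component actually encodes:

```typescript
// What a shape-based design tool can emit: geometry without intent.
const toolOutput = {
  type: "rectangle",
  width: 343,
  height: 44,
  cornerRadius: 8,
  fill: "#007AFF",
};

// What production code needs: a component whose props express intent.
// `ButtonProps` and `describeButton` are hypothetical names for illustration.
interface ButtonProps {
  label: string;
  variant: "primary" | "secondary";
  disabled?: boolean;
  onPress: () => void;
}

function describeButton(props: ButtonProps): string {
  // A tool that knows this is a button can reason about states and behavior,
  // not just pixels.
  return `<Button variant="${props.variant}"${props.disabled ? " disabled" : ""}>${props.label}</Button>`;
}

console.log(describeButton({ label: "Sign in", variant: "primary", onPress: () => {} }));
// → <Button variant="primary">Sign in</Button>
```

The geometry in `toolOutput` contains some useful values, but nothing a programmer can map onto the component's actual API without re-deriving the intent by hand.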

What You See Is What You Get

For a long time, there have been WYSIWYG engineering tools that market themselves as design tools - Webflow, Dreamweaver, RapidWeaver, and Macaw are all examples of this. These are outside the scope of this conversation because they focus on code output for specific platforms with specific methodology. This kind of tooling is great for freelancers or solo designers who need to design, code, and ship by themselves (especially if they don’t have the benefit of being able to take the time to fully learn to code) but they don’t enable teams at scale or effective cross-platform design.

Common inefficiencies in design

UI designers love doing busy work, and our tools are wonderfully suited to the purpose.
  • Manually nudging and meticulously adjusting alignment are inefficient methods of creating consistent spacing across an application.
  • Reorganizing groups, artboards, pages, and files rarely contributes to shipping.
  • Drawing flow charts to indicate user paths and routing is a massive timesink and the maintenance cost is significant.
  • Redrawing elements from an app after you find out your static design wasn’t accurate to production rendering is a repetitive process that leads to confusion on future iterations. 
  • Attempting to manage design systems manually across incompatible formats is a full-time job that always results in inconsistencies.
  • Iterating on designs without testing them with users lends itself to product decisions that are detached from the market’s requirements. 
  • Attempting to emulate system elements by building UI kits and sticker sheets is recreating work that is already done more accurately elsewhere.
We, as an industry, lose hundreds of thousands of hours per year on these mundane, tedious, ineffective tasks. We gain nothing new from them. We don’t get to work on harder, more important problems because we’re expending our time and effort on minutiae that feel like progress. All of this tedium is a direct result of the separation of design and development processes.

Getting on the same page

The goal of any UI design tool should be to provide visual controls to enable anyone to ideate and create interfaces quickly. To be more effective, we need data interoperability between design and engineering tools. If your UI design tool knows that it’s making UI for a given platform, you can build with elements that map directly to components in your codebase. Once there’s correlation between properties (whether directly or due to custom mapping), components can be passed back and forth freely between tools, effectively integrating the design and development processes into one cohesive, omni-directional unit.
This interchange system should be a simple declarative model based on UI-specific primitives (think button or input vs rectangle or ellipse) with only slight abstraction over the actual properties if any.
Example: I’m a designer working on a sign-in modal within an app for web (in React).
In the instance of the “Name” input above, a design tool sees a group called “Input - Name” that contains a rectangle and two text layers - one for the text value and one for the placeholder text. 
The React component sees an HTML <input> element with several inherent properties and states available to the programmer. Each state needs to be designed, but unless I go to the MDN page for each element and read through its API properties, I likely don’t understand the full scope of the work I’m setting out to do. Which means I’m really just designing the states I can think of off the top of my head.
On one hand, I’m effectively designing blind but for my own experience. On the other hand, the tools I use for programming tell me everything I want to know about what I’m building - which properties are available, the syntax for them, and they even try to autocomplete properties for me! These features help me to avoid mistakes and tell me what kind of data each element needs - but design tools have never had such a mechanism. 
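A design tool that knows its primitives could surface exactly this kind of information. A minimal sketch of the idea, assuming hypothetical state lists (these are illustrative, not the full HTML input specification):

```typescript
// Sketch: a tool that knows its primitives can list the states you still
// need to design. The state lists are illustrative assumptions.
const requiredStates: Record<string, string[]> = {
  input: ["default", "focus", "disabled", "invalid", "placeholder-shown"],
  button: ["default", "hover", "active", "disabled"],
};

// Given the states a designer has already drawn, report what is missing.
function missingStates(primitive: string, designed: string[]): string[] {
  const required = requiredStates[primitive] ?? [];
  return required.filter((state) => !designed.includes(state));
}

console.log(missingStates("input", ["default", "focus"]));
// → ["disabled", "invalid", "placeholder-shown"]
```

This is the design-tool equivalent of autocomplete: instead of the designer recalling states from memory, the tool enumerates the scope of the work up front.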
What if there was a UIKit syntax plugin for your design tool that added new iOS-specific primitives so you no longer need to draw buttons by way of rectangles, but instead choose an actual UIButton in the toolbar, insert it into a View rather than an artboard and simply begin adjusting its actual native properties?  Your design tool could then:
  • tell you that the icon you’re trying to put inside your button should be 25px × 25px when you’re designing @ 1x (resolution probably doesn’t even matter anymore for most tasks) and automatically configure the export settings for the asset and push them to your team’s asset library
  • let you know that you still need to provide highlighted and disabled states for your button and then show you all your states side-by-side when editing them for easy reference
  • automatically apply default padding when you put your button into your UINavigationBar or automatically reconfigure your tab spacing when you decide it should go in the UITabBar instead
  • allow the button to accept a route property that points to another View in the app to navigate to when you click on the button
  • auto-generate a flow-map for your application based on route properties
  • handle routing and TabBar behavior when you decide you want to see how it feels. Maybe even compile a basic version of the interface so you can test it in Simulator or on your phone.
You as a designer become much more effective because your tool actually has some concept of intent.
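The flow-map idea above can be sketched in a few lines. The data shapes here (`Screen`, a `route` property on elements) are assumptions for illustration, not a spec:

```typescript
// Sketch: deriving a flow map from `route` properties on routed elements.
interface Screen {
  name: string;
  elements: { primitive: string; route?: string }[];
}

// Build an adjacency list: screen -> screens reachable via routed elements.
function flowMap(screens: Screen[]): Map<string, string[]> {
  const map = new Map<string, string[]>();
  for (const screen of screens) {
    const targets = screen.elements
      .filter((el) => el.route !== undefined)
      .map((el) => el.route as string);
    map.set(screen.name, targets);
  }
  return map;
}

const app: Screen[] = [
  { name: "favorites", elements: [{ primitive: "UIButton", route: "contactDetail" }] },
  { name: "contactDetail", elements: [{ primitive: "UIButton", route: "favorites" }] },
];

console.log(flowMap(app).get("favorites")); // → ["contactDetail"]
```

Because the routes are data rather than a hand-drawn diagram, the flow chart regenerates itself whenever the design changes - no maintenance cost.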

Interface: a proposed intermediary format

An ideal interchange format would be a description of an interface as a data object that functions as a translation layer. This could be in an industry standard format such as JSON where values could be mapped to appropriate properties on a case-specific basis determined by which platform or language you are intending to ship to. Components can be described by diffing component and state instances against their default properties - essentially exposing component inheritance. By moving your source of truth to an open source, moderately-abstracted textual description, both sides get many benefits.
  • existing syntax definition packages could be used as a way to determine new sets of primitives
  • tool publishers can choose which properties to read and write enabling specialized tooling for specific purposes (e.g. prototyping tools could write animation or routing properties, while icon design tools could write asset references, and dev tools could write function or data API references)
  • references written by one tool could be used by another tool (e.g. data references provided by engineering tools could be read by design tools to ingest data from a production API, style properties could provide styling code to the engineering codebase)
  • generated code could be modified to support your team’s code formatting, modules, and components so output from this system could feel like code written by another team member
  • as a text file, it can live in the same version control repo as the rest of the application’s codebase
  • linting, testing, and formatters can be used to find errors and format code prior to committing to a repo
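The "diffing against default properties" idea can be sketched directly. Property names and defaults below are illustrative assumptions, not real UIKit values:

```typescript
// Sketch: describe a component by diffing an instance against its
// primitive's defaults, so only overridden properties are serialized.
type Props = Record<string, string | number>;

function diffAgainstDefaults(defaults: Props, instance: Props): Props {
  const overrides: Props = {};
  for (const key of Object.keys(instance)) {
    if (instance[key] !== defaults[key]) {
      overrides[key] = instance[key];
    }
  }
  return overrides;
}

// Hypothetical defaults for illustration only.
const uiButtonDefaults: Props = { padding: 8, tint: "#007AFF", fontSize: 17 };
const myButton: Props = { padding: 8, tint: "#FF2D55", fontSize: 17 };

// Only the changed property survives, keeping the interchange file small
// and exposing the inheritance chain explicitly.
console.log(diffAgainstDefaults(uiButtonDefaults, myButton)); // → { tint: "#FF2D55" }
```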
Here’s an idea of how I think it could be structured:
// interface.json
{
  // tell your tools which components/syntax to load
  "platform": ["iOS-UIKit", "myTeamDesignSystem"],

  // UI components are defined by making declarative property changes against
  // system primitives or imported from existing definitions (e.g. design system)
  "components": {
    "row": {
      "primitive": "UITableViewCell",
      "properties": {
        "padding": "16px",
        "tint": "{myStyleGuide.palette.blue60}"
      },
      "children": [
        {
          "primitive": "UIImageView",
          "properties": {
            "aspect": 1,
            "src": ""
          }
        },
        {
          "primitive": "myColumn",
          "children": [
            {
              "kind": "text",
              "type": "string",
              "data": "{{name}}"
            },
            {
              "kind": "text",
              "type": "number",
              "data": "{{phoneNumber}}"
            }
          ]
        }
      ]
    },
    "favorite": {
      "primitive": "UIButton"
    }
  },

  // application is defined as a data object with references to external sources
  "application": {
    "name": "Phone",
    // routing and navigation can be defined at the root level
    // (some will need to be defined at lower levels)
    "root": {
      "kind": "tabs",
      "screens": ["favorites", "recents", "contacts", "keypad", "voicemail"]
    },
    "screens": {
      "favorites": {
        "component": "UITableView",
        "content": [
          {
            "component": "favorite_",
            "content": {
              "name": "{myData.user[n].name}",
              "phoneNumber": "{myData.user[n].phoneNumber}"
            },
            "route": "favorite_"
          }
        ]
      },
      "recents": {},
      "contacts": {},
      "keypad": {},
      "voicemail": {}
    }
  }
}
If we can get our different toolsets talking to each other, we can work together more effectively with shared language and less overhead. Designers can have as much freedom as developers do to move between specialized tools to accomplish a purpose, without loss of work or fidelity. I believe this needs to be an open source solution so that tools can be built and maintained publicly, though manufacturers could define new properties and adapters by means of vendor prefixes if a property isn’t standardized.
This interoperability would hopefully do for design tools what it has done for engineering tools. Our tools should be better by being able to specialize while contributing to a central unit, thus allowing us to stop building the same massive feature sets as every other tool and instead specialize and move the field forward. You should be able to move from any design tool to any other design tool with minimal configuration.

Where we start

Right now, some features of this could be built on top of symbols and component systems in tools with plugin APIs. That would be hacky, but it’s possible. 
There have been attempts at similar interchange formats in the past, but they have traditionally been tied to a single manufacturer, so logically, this probably starts small with up-and-coming, community-driven projects and open source IDE plugins leading to successively larger tool publishers coming onboard and eventually organization of a core team or working group to manage an extensible standard if it gains traction. If you work on design tools or maintain syntax packages and plugins for IDEs and you’re interested in driving this forward, please get in touch. It’d be awesome to see our industry make some huge leaps forward so we can focus on bigger problems.
This is by no means a finished document, but hopefully a starting point for a larger conversation that leads to real improvements for our industry and makes us all better at what we do. Please jump into the chat below and share with others!
Thank you to everyone who has helped inform and develop these ideas over the past year: Payam Rajabi, Sam Soffes, Diana Mounter, Max Schoening, May-Li Khoe, Adam Michela, Josh Puckett, Carmel DeAmicis, Sho Kuwamoto, Kris Rasmussen, Rasmus Andersson, Daniel Burka, Josh Brewer, Tom Moor, Kevin Smith, Marc Edwards, Drew Wilson, Danny Trinh, Brent Jackson, Adam Morse, Ben Wilkins, Jon Gold, Lucas Smith, Pasquale D’silva, Jake Marsh, Rylan Barnes, Vlad Magdalin, Linda Pham, Sergie Magdalin, Barrett Johnson, Koen Bok, Jorn van Dijk, Brian Lovin, and Max Stoiber

November 22, 2017 at 11:41pm
indeed - seems to be the same intent with slightly different goals (theirs seems to be aiming for cross-platform with the same data. I'm a little skeptical of that piece.) but they definitely emphasized the omni-directionality of flow across tools, which I think is very important.

November 23, 2017 at 8:43am
This has been on my mind for some time too, but this write-up is much more thorough than I could've imagined. Thanks for that!
At the Sketch Plugin Hackathon that we organized in Berlin there was a team that made a proof of concept of the `.svgsymbol` file format that aims for omni-directionality and cross-platform design/dev. Even though the PoC was limited to working with Sketch & Framer, I can see it as a first step (like you mentioned) that could go in this direction for building on top of the current structures of symbols/components and map that to a codebase.
And the recording of the hack demo here:
What do you think?

November 27, 2017 at 8:48am
looks relevant (mostly in Russian, but at least you can check the syntax)
great post, thank you - I'm currently having this exact problem going from design to engineering and back again; in some cases engineering may change some elements and create some 'design debt' that we need to update back in our design system within Sketch. We've looked at the React Sketch plugin, which would help, but it's not great. This would definitely be the direction forward!
As a designer whose full time job is basically updating and maintaining a UI library alongside a custom CSS framework and Angular module architecture... this resonates with me on a deep level. Will be paying close attention to the conversation in here! Thanks for kicking it off
seems like it's a step in the right direction, although I still think SVG primitives are the wrong ones.

November 28, 2017 at 3:17am
This is a great breakdown and explanation of the current state of design tools - I'm definitely sending this to my engineering friends. I believe Airbnb has been tackling this for a while; they recently released their developer preview for Lona.
I went to lunch with the Airbnb design tools team the day before publishing and got a lil heads up on Lona. Excited to see where it goes!!!

November 28, 2017 at 8:25pm
Just wanted to say I find this discussion really valuable and important. I've recently struggled first-hand with the huge challenge that is trying to get a team to come together around an intermediary JSON format that describes an app's UI.
As someone who works on both sides of the fence I totally agree that, for interaction designers, the workflow of handing off pictures to engineers to build from scratch simply has to mature. It's inefficient on so many levels.
Interesting article. I don't disagree with your points, but there are multiple layers to a design process, and this approach is very broad, with several moving pieces to orchestrate, which makes it much harder to come up with a solution.
I've taken to focusing specifically on one part of our design-to-engineering handoff process, primarily around how we are naming our components between design and engineering. Most of this contract/naming happens inside of Sketch. Before we start building, we have a discussion about how all the components are going to be architected, following Brad Frost's `Atomic Design` pattern, which we've now built our own set of yeoman generators around to make prototyping/building even faster.
I've been in the process of writing an article about our process and publishing all our code. And then I stumbled onto `styled-components` and yet more changes in the React ecosystem. I'm now realizing that I have to apply a new lens to all these changes in the technology world yet again... So anything we build/propose has to be really flexible and not prescriptive, as the underlying technology implementations change so rapidly that tying it to them means we are always starting from scratch.
I was reading the documentation on Airbnb’s Lona and I was reminded of what I loved about Sketch in the first place. ‘Sketch pioneered an incredibly effective workflow for rapidly iterating on ideas. The infinite canvas, instant artboard duplication, and intuitive hotkeys are key to translating an idea into digital form. Designing in Sketch should be *easy* and *playful*. Designing in Lona Studio [or Interface 😜], by contrast, is intended to be *powerful* and *precise*.’
Do you envision Interface being all of the above: easy & playful, powerful & precise?
I disagree strongly with the concept of Sketch's value being playfulness, though ease is ideal in any step of an engineering process (which I consider design to be part of). To be clear though, Interface would not be a client like Sketch or Lona Studio. It's a proposal for a data format for tools to hook into.
i'd say it should be precise and extensible (which you could arguably correlate to 'powerful').
There's nothing playful about a data format, and 'easy' is too subjective in this context. Example: should it be human-readable? I'd say that would make it "easier" to work with in one way, but I'm not sure what size these documents would grow to; if human-readability means the format isn't optimized for file size, it could become slower to work with and thus not "easy" in that way...

November 29, 2017 at 7:44am
I think you have a very interesting idea. The only problem I see is that the number of stakeholders required to get together to make this work is going to make things hard. Perhaps a better approach would be to instead focus on the one thing that is not changing - namely the user - and how they interpret the content on the page: priority maps to indicate what users should see in what order, and context maps to provide clues as to which pieces of the puzzle should go together. If they are not together, the content makes less sense. With all of this you could then have a system which decides on the size, flow, and orientation of the UI based on algorithms. You are then able to use the pen for its intended purpose - generating static content - while the tiling system removes the pen in favour of a dynamic system. I should also add that when I say algorithms I am not talking about code, but rather forces that should apply to the tiles and scaling factors on each of the tiles. You could then also have inheritance of these properties if you really wanted to.

November 29, 2017 at 10:36pm
I think you're trying to solve an unrelated challenge. I agree that it's complex and there are a lot of stakeholders, but that doesn't make it less important to solve.

December 4, 2017 at 3:33am
Would it be easier to start with the developer tools? iOS and Android each have UI builders. Could they be made more designer-friendly?
Interestingly, I don't think they're particularly dev-friendly either. Do you know any devs who actually enjoy using Interface Builder?
That said, I still think the core of the problem is a lack of communication between specialized tools with Interface Builder being a prime example of trying to overextend a single tool beyond its value.

December 4, 2017 at 1:56pm
I don't know if they particularly like it, but they all seem to use it. We're trying to get our entire design team working exclusively in our prototyping tool with widgets styled to match our guide. It will be easier once the guide is done. The end result won't be usable code, but it will be documentation that calls out a button as a button. Our other goal is to only stray from those defined widgets when we absolutely have to.
Maybe it's environmental. Designers in South Florida aren't expected to be front-end developers, and for the most part companies don't have enough engineers on staff to implement novel UIs with lots of animations. At least, not the ones I've spoken with. (Except for Magic Leap and their fancy office that I pass by.)
Hmmm. I don’t think most iOS engineers use IB. Anecdotal, though.