
Product Design




November 20, 2017 at 6:27pm (Edited 4 years ago)
In this post, I’m going to lay out a case for an intermediary format between design and engineering tools to enable more efficient, capable tooling for product teams generally and designers especially. This proposal is based on a series of conversations I’ve had with small groups of designers and engineers over the past year or so. 
If we can provide appropriate context, our computers can do the work of translation between design and development processes for us. This should remove a significant amount of work and confusion between teams and enable individuals and companies to focus on harder, more important problems.

The purpose of UI design

A software company’s purpose is to solve some problem for a user. An effective solution can be enabling new behaviors, making existing processes easier, reducing cost, or simply giving people something they enjoy. For any of those things to happen, a product needs to ship to an end user.
This maps directly to the purpose of any interface designer’s work - which is to compose an interaction layer for software that enables users to access solutions to a given problem set. If a user is unable to do so, the designer of that software is not done yet. Therefore, a designer cannot be done until the product has shipped.
As a note, I’m distinctly separating “interface design” and “illustration”. Illustrations often appear in interfaces or alongside products, but the processes and tools necessary for successful instances of each are enormously different. If you call yourself a UX designer, product designer, interaction designer, or something else where your output is meant to lead directly to the production of user-facing software, I’m putting your work in the bucket labeled “interface design”. If the final product of your work is an image, icon, illustration, typeface, advertisement, print media, or some other visual communication medium without native interactivity, I’m putting that work in the “illustration” category.

The problem with UI design tools

Every popular design tool available today is optimized for illustration. The tool’s marketing site and documentation might talk about the intersection of design and engineering, provide tutorials on how to compose design systems, and the artboards might come in common screen sizes, but they are ultimately optimized for drawing pictures. For proof of this, look no further than the toolbar of your current design tool:
Pen tools were designed for illustration. Shape layers were designed for illustration.
A common argument for the current model of design tools is that it allows you to directly manipulate layers to create quickly and accurately. For UI design tasks especially, this is a false premise. These tools are an imperative abstraction over a set of properties determined by primitives - rectangles, ellipses, vectors - that are inappropriate for the task required of them. Because of those primitives, the material output of such a tool will always be an image at best. That’s great for icon work or spot illustrations in your UI, but it adds significant friction to the process of building the interface itself.
A quick example: some design tools attempt to provide a basic description of your UI in code. The problem is that your tool thinks the button you designed is a rectangle with rounded corners, not a button, so any code it provides is only marginally useful - if at all.
The design tool’s code output is antithetical to how components are built in production. As a frontend engineer, there is no case in which I would paste this into my codebase. Some of the values are useful, but the property declarations don’t have enough context about the intent of the design to provide significant benefit to the programmers they’re built to serve.
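As a hypothetical illustration of that gap, compare the kind of style block a tool exports with the call site an engineer actually writes. The values and names below are invented for illustration, not taken from any real tool’s output:

```typescript
// Hypothetical reconstruction of the mismatch described above.

// The kind of style block a design tool exports for a "button":
// geometry and paint, with no intent.
const toolExport = [
  "width: 327px;",
  "height: 48px;",
  "border-radius: 4px;",
  "background: #0366D6;",
].join("\n");

// What a frontend engineer would actually write: a call site for an
// existing component that carries intent instead of geometry.
const productionUsage = '<Button variant="primary">Sign in</Button>';

// The export cannot tell you this is a Button at all - that context
// lives only in the designer's head.
console.log(toolExport.includes("Button"));      // → false
console.log(productionUsage.includes("Button")); // → true
```

Some of the exported values (the radius, the color) are reusable, but nothing in the export carries the one fact the codebase cares about: that this is a button.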

What You See Is What You Get

For a long time, there have been WYSIWYG engineering tools that market themselves as design tools - Webflow, Dreamweaver, RapidWeaver, and Macaw are all examples of this. These are outside the scope of this conversation because they focus on code output for specific platforms with specific methodology. This kind of tooling is great for freelancers or solo designers who need to design, code, and ship by themselves (especially if they can’t take the time to fully learn to code), but it doesn’t enable teams at scale or effective cross-platform design.

Common inefficiencies in design

UI designers love doing busywork, and our tools are wonderfully effective at enabling it.
  • Manually nudging and meticulously adjusting alignment are inefficient methods of creating consistent spacing across an application.
  • Reorganizing groups, artboards, pages, and files rarely contributes to shipping.
  • Drawing flow charts to indicate user paths and routing is a massive time sink, and the maintenance cost is significant.
  • Redrawing elements from an app after you find out your static design wasn’t accurate to production rendering is a repetitive process that leads to confusion on future iterations. 
  • Attempting to manage design systems manually across incompatible formats is a full-time job that always results in inconsistencies.
  • Iterating on designs without testing them with users lends itself to product decisions that are detached from the market’s requirements. 
  • Attempting to emulate system elements by building UI kits and sticker sheets is recreating work that is already done more accurately elsewhere.
We, as an industry, lose hundreds of thousands of hours per year on these mundane, tedious, ineffective tasks. We gain nothing new from them. We don’t get to work on harder, more important problems because we’re expending our time and effort on minutiae that feels like progress. All of this tedium is a direct result of the separation of design and development processes.

Getting on the same page

The goal of any UI design tool should be to provide visual controls to enable anyone to ideate and create interfaces quickly. To be more effective, we need data interoperability between design and engineering tools. If your UI design tool knows that it’s making UI for a given platform, you can build with elements that map directly to components in your codebase. Once there’s correlation between properties (whether directly or due to custom mapping), components can be passed back and forth freely between tools, effectively integrating the design and development processes into one cohesive, omni-directional unit.
This interchange system should be a simple declarative model based on UI-specific primitives (think button or input vs. rectangle or ellipse) with only slight abstraction, if any, over the actual properties.
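To make the distinction concrete, here is a hypothetical sketch of the same sign-in button described two ways - as a shape (what today’s tools store) and as a UI primitive (what an interchange format could store). All names and values are invented:

```typescript
// Shape-based: the tool only knows geometry and paint.
const shapeDescription = {
  type: "rectangle",
  cornerRadius: 4,
  fill: "#0366d6",
  children: [{ type: "text", value: "Sign in" }],
};

// Primitive-based: the tool knows intent, so other tools can act on it.
const primitiveDescription = {
  primitive: "button",
  label: "Sign in",
  variant: "primary",
  route: "dashboard", // hypothetical: screen to navigate to on press
};

console.log(shapeDescription.type, "vs", primitiveDescription.primitive);
```

Both descriptions render the same pixels; only the second one can be mapped to a component in a codebase.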
Example: I’m a designer working on a sign-in modal within an app for web (in React).
In the instance of the “Name” input above, a design tool sees a group called “Input - Name” that contains a rectangle and two text layers - one for the text value and one for the placeholder text. 
The React component sees an HTML <input> element with several inherent properties and states available to the programmer. Each state needs to be designed, but unless I go to the MDN page for each element and read through its API properties, I likely don’t understand the full scope of the work I’m setting out to do - which means I’m really just designing the states I can think of off the top of my head.
On one hand, I’m effectively designing blind but for my own experience. On the other hand, the tools I use for programming tell me everything I want to know about what I’m building - which properties are available, the syntax for them, and they even try to autocomplete properties for me! These features help me to avoid mistakes and tell me what kind of data each element needs - but design tools have never had such a mechanism. 
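As a sketch of what such a mechanism could look like, here is a hypothetical machine-readable definition for an input primitive - the design-tool analogue of IDE autocomplete data. The state and property names are illustrative, not the real <input> API:

```typescript
// Hypothetical definition a design tool could load for an "input"
// primitive. Names are invented for illustration.
const inputDefinition = {
  primitive: "input",
  requiredStates: ["default", "focus", "filled", "disabled", "invalid"],
  properties: { placeholder: "string", value: "string", disabled: "boolean" },
};

// With this loaded, the tool can tell the designer which states still
// need designing, instead of leaving it to memory.
function missingStates(designed: string[]): string[] {
  return inputDefinition.requiredStates.filter((s) => !designed.includes(s));
}

console.log(missingStates(["default", "focus"]));
// → ["filled", "disabled", "invalid"]
```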
What if there were a UIKit syntax plugin for your design tool that added new iOS-specific primitives, so you no longer need to draw buttons by way of rectangles, but instead choose an actual UIButton in the toolbar, insert it into a View rather than an artboard, and simply begin adjusting its actual native properties? Your design tool could then:
  • tell you that the icon you’re trying to put inside your button should be 25px × 25px when you’re designing @ 1x (resolution probably doesn’t even matter anymore for most tasks) and automatically configure the export settings for the asset and push them to your team’s asset library
  • let you know that you still need to provide highlighted and disabled states for your button and then show you all your states side-by-side when editing them for easy reference
  • automatically apply default padding when you put your button into your UINavigationBar or automatically reconfigure your tab spacing when you decide it should go in the UITabBar instead
  • allow the button to accept a route property that points to another View in the app to navigate to when you click on the button
  • auto-generate a flow-map for your application based on route properties
  • handle routing and TabBar behavior when you decide you want to see how it feels. Maybe even compile a basic version of the interface so you can test it in Simulator or on your phone.
You as a designer become much more effective because your tool actually has some concept of intent.
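The route and flow-map bullets above can be sketched in a few lines. The types, property names, and screen names here are all hypothetical:

```typescript
// Sketch: auto-generating a flow map from `route` properties.
interface Element {
  primitive: string;
  route?: string; // hypothetical: screen to navigate to on tap
  children?: Element[];
}

interface Screen {
  name: string;
  root: Element;
}

// Walk every screen's element tree and collect (from, to) edges.
function flowMap(screens: Screen[]): Array<[string, string]> {
  const edges: Array<[string, string]> = [];
  const walk = (screen: string, el: Element) => {
    if (el.route) edges.push([screen, el.route]);
    (el.children ?? []).forEach((c) => walk(screen, c));
  };
  screens.forEach((s) => walk(s.name, s.root));
  return edges;
}

const screens: Screen[] = [
  { name: "favorites", root: { primitive: "UITableView", children: [{ primitive: "UIButton", route: "contact-detail" }] } },
  { name: "contact-detail", root: { primitive: "UIView", children: [{ primitive: "UIButton", route: "favorites" }] } },
];

console.log(flowMap(screens));
// → [["favorites", "contact-detail"], ["contact-detail", "favorites"]]
```

Nothing here is design-tool-specific - which is exactly the point: any tool that can read the route properties can draw the flow map for free.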

Interface: a proposed intermediary format

An ideal interchange format would describe an interface as a data object that functions as a translation layer. This could be an industry-standard format such as JSON, where values could be mapped to appropriate properties on a case-specific basis determined by which platform or language you intend to ship to. Components can be described by diffing component and state instances against their default properties - essentially exposing component inheritance. By moving your source of truth to an open-source, moderately abstracted textual description, both sides get many benefits:
  • existing syntax definition packages could be used as a way to determine new sets of primitives
  • tool publishers can choose which properties to read and write, enabling specialized tooling for specific purposes (e.g. prototyping tools could write animation or routing properties, icon design tools could write asset references, and dev tools could write function or data API references)
  • references written by one tool could be used by another tool (e.g. data references provided by engineering tools could be read by design tools to ingest data from a production API, style properties could provide styling code to the engineering codebase)
  • generated code could be modified to support your team’s code formatting, modules, and components so output from this system could feel like code written by another team member
  • as a text file, it can live in the same version control repo as the rest of the application’s codebase
  • linting, testing, and formatters can be used to find errors and format code prior to committing to a repo
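The diffing idea mentioned above - describing a component as only the properties that differ from its primitive’s defaults - could be sketched like this (property names and default values are invented):

```typescript
// Sketch: a component stored as a diff against its primitive's
// defaults, so the interchange file only records what the designer
// actually changed.
type Props = Record<string, string | number>;

function diffAgainstDefaults(defaults: Props, instance: Props): Props {
  const overrides: Props = {};
  for (const key of Object.keys(instance)) {
    if (instance[key] !== defaults[key]) overrides[key] = instance[key];
  }
  return overrides;
}

// Hypothetical defaults for a UIButton primitive:
const uiButtonDefaults: Props = { padding: 8, tint: "#007aff", cornerRadius: 4 };

// The designer's customized button:
const favoriteButton: Props = { padding: 8, tint: "#ff3b30", cornerRadius: 4 };

console.log(diffAgainstDefaults(uiButtonDefaults, favoriteButton));
// → { tint: "#ff3b30" }
```

Storing only overrides keeps the file small and, more importantly, makes inheritance explicit: update the primitive’s defaults and every undiffed instance follows.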
Here’s an idea of how I think it could be structured:
// interface.json
// tell your tools which components/syntax to load
{
  "platform": ["iOS-UIKit", "myTeamDesignSystem"],

  // UI components are defined by making declarative property changes
  // against system primitives, or imported from existing definitions
  // (e.g. a design system)
  "components": {
    "row": {
      "primitive": "UITableViewCell",
      "properties": {
        "padding": "16px",
        "tint": "{myStyleGuide.palette.blue60}"
      },
      "children": [
        {
          "primitive": "UIImageView",
          "properties": {
            "aspect": 1,
            "src": ""
          }
        },
        {
          "primitive": "myColumn",
          "children": [
            {
              "kind": "text",
              "type": "string",
              "data": "{{name}}"
            },
            {
              "kind": "text",
              "type": "number",
              "data": "{{phoneNumber}}"
            }
          ]
        }
      ]
    },
    "favorite": {
      "primitive": "UIButton"
    }
  },

  // the application is defined as a data object with references to external sources
  "application": {
    "name": "Phone",

    // routing and navigation can be defined at the root level
    // (some will need to be defined at lower levels)
    "root": {
      "kind": "tabs",
      "screens": ["favorites", "recents", "contacts", "keypad", "voicemail"]
    },
    "screens": {
      "favorites": {
        "component": "UITableView",
        "content": [
          {
            "component": "favorite_",
            "content": {
              "name": "{myData.user[n].name}",
              "phoneNumber": "{myData.user[n].phoneNumber}"
            },
            "route": "favorite_"
          }
        ]
      },
      "recents": {},
      "contacts": {},
      "keypad": {},
      "voicemail": {}
    }
  }
}
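As a minimal sketch of the translation layer in action, here is a hypothetical adapter pair that reads one property from such a description and emits styling for two targets. The mapping rules are invented for illustration, not a real adapter API:

```typescript
// One interchange component, two target platforms.
const component = {
  primitive: "UITableViewCell",
  properties: { padding: "16px" },
};

// Hypothetical adapter: interchange padding → SwiftUI modifier.
function toSwiftUI(padding: string): string {
  return `.padding(${parseInt(padding, 10)})`;
}

// Hypothetical adapter: interchange padding → CSS declaration.
function toCSS(padding: string): string {
  return `padding: ${padding};`;
}

console.log(toSwiftUI(component.properties.padding)); // → ".padding(16)"
console.log(toCSS(component.properties.padding));     // → "padding: 16px;"
```

The value is authored once; each platform’s adapter decides what it means locally.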
If we can get our different toolsets talking to each other, we can work together more effectively with shared language and less overhead. Designers can have as much freedom as developers do to move between specialized tools to accomplish a purpose, without loss of work or fidelity. I believe this needs to be an open source solution so that tools can be built and maintained publicly, though manufacturers could define new properties and adapters by means of vendor prefixes if a property isn’t standardized.
This interoperability would hopefully do for design tools what it has done for engineering tools. Each tool would be better for being able to specialize while contributing to a central format, freeing it from rebuilding the same massive feature set as every other tool and letting the field move forward. You should be able to move from any design tool to any other design tool with minimal configuration.

Where we start

Right now, some features of this could be built on top of symbols and component systems in tools with plugin APIs. That would be hacky, but it’s possible. 
There have been attempts at similar interchange formats in the past, but they have traditionally been tied to a single manufacturer. Logically, then, this probably starts small: up-and-coming, community-driven projects and open source IDE plugins lead the way, successively larger tool publishers come onboard, and eventually a core team or working group organizes to manage an extensible standard if it gains traction. If you work on design tools or maintain syntax packages and plugins for IDEs and you’re interested in driving this forward, please get in touch. It’d be awesome to see our industry make some huge leaps forward so we can focus on bigger problems.
This is by no means a finished document, but hopefully a starting point for a larger conversation that leads to real improvements for our industry and makes us all better at what we do. Please jump into the chat below and share with others!
Thank you to everyone who has helped inform and develop these ideas over the past year: Payam Rajabi, Sam Soffes, Diana Mounter, Max Schoening, May-Li Khoe, Adam Michela, Josh Puckett, Carmel DeAmicis, Sho Kuwamoto, Kris Rasmussen, Rasmus Andersson, Daniel Burka, Josh Brewer, Tom Moor, Kevin Smith, Marc Edwards, Drew Wilson, Danny Trinh, Brent Jackson, Adam Morse, Ben Wilkins, Jon Gold, Lucas Smith, Pasquale D’silva, Jake Marsh, Rylan Barnes, Vlad Magdalin, Linda Pham, Sergie Magdalin, Barrett Johnson, Koen Bok, Jorn van Dijk Brian Lovin, and Max Stoiber

November 29, 2017 at 7:53am
I think you have a very interesting idea. The only problem I see is that the number of stakeholders required to get together to make this work is going to make things hard. Perhaps a better approach would be to focus on the one thing that is not changing - namely the user, and how they interpret the content on the page. You could provide priority maps to indicate what users should see in what order, and context maps to provide clues as to which pieces of the puzzle should go together (if they are not together, the content makes less sense). With all of this you could have a system that decides on the size, flow, and orientation of the UI based on algorithms. You are then able to use the pen for its intended purpose - generating static content - while the tiling system removes the pen in favour of a dynamic system. I should add that when I say algorithms I am not talking about code, but rather forces that apply to the tiles and scaling factors on each of the tiles. You could then also have inheritance of these properties if you really wanted to.

November 29, 2017 at 10:36pm
I think you're trying to solve an unrelated challenge. I agree that it's complex and there are a lot of stakeholders, but that doesn't make it less important to solve.

December 4, 2017 at 3:33am
Would it be easier to start with the developer tools? iOS and Android each have UI builders. Could they be made more designer-friendly?
Interestingly, I don't think they're particularly dev-friendly either. Do you know any devs who actually enjoy using Interface Builder?
That said, I still think the core of the problem is a lack of communication between specialized tools with Interface Builder being a prime example of trying to overextend a single tool beyond its value.

December 4, 2017 at 1:56pm
I don't know if they particularly like it, but they all seem to use it. We're trying to get our entire design team working exclusively in our prototyping tool with widgets styled to match our guide. It will be easier once the guide is done. The end result won't be usable code, but it will be documentation that calls out a button as a button. Our other goal is to only stray from those defined widgets when we absolutely have to.
Maybe it's environmental. Designers in South Florida aren't expected to be front-end developers, and for the most part companies don't have enough engineers on staff to implement novel UIs with lots of animations. At least, not the ones I've spoken with. (Except for Magic Leap and their fancy office that I pass by.)
Hmmm. I don’t think most iOS engineers use IB. Anecdotal, though.

December 5, 2017 at 12:20am
Great read! Definitely resonated with me. The keyword is “mapping”. You’re basically creating a component/interaction API in the form of a JSON file that can be read/written by different tools. Define the model first (props, states, events, etc.), then work on the implementation across tools.
Your mapping proposal goes pretty deep (e.g. as deep as mapping presentation properties like padding). I think one of the challenges is going to be getting the design community on board at that level initially. The industry is progressing but we aren’t quite there yet. Do you think there are steps we can take that gradually nudge the community towards this way of thinking? As an example, getting designers to structure and name their document layers in a way that closely maps to how they would be implemented. Your signup form example above depicts this well. If a modal component is called `<Modal>` in React, then the corresponding Sketch layer should also be called `Modal`.
Next, could we disregard the presentation layer completely and instead focus on mapping components, their properties, states, content, and events? Presentation could be handled manually at the implementation level. This would obviously lead to some inconsistencies between implementations, but it would allow designers to continue using the tools they love for the time being. A very basic example is the concept of padding, which Sketch has no understanding of.
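That stripped-down mapping - components, props, states, and events, with presentation left to each implementation - might look something like this sketch (all names are illustrative):

```typescript
// Hypothetical layer-to-component mapping, presentation omitted.
const modalMapping = {
  designLayer: "Modal",   // Sketch layer name
  component: "Modal",     // React component it corresponds to
  props: { title: "string", onClose: "function" },
  states: ["open", "closed"],
};

// A linter could then flag layers whose names match no known component:
const knownComponents = ["Modal", "Button", "Input"];

function isMapped(layerName: string): boolean {
  return knownComponents.includes(layerName);
}

console.log(isMapped("Modal"));        // → true
console.log(isMapped("Rectangle 12")); // → false
```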

December 23, 2017 at 2:01pm
To your point about “The problem with UI design tools”, here we are proposing a more interface-oriented composition model.
Here are all the basic blocks we use to make even the most complex interfaces.
If you want to see how it applies to the actual interface, here's a quick example. On the left the preview of the interface and on the right, the composed representation of Views’ blocks.
Views core is open source. We have been working on WYSIWYG design tools since we realised designers don't always feel motivated to work with code (we launched a version with JSON over a year ago, and another with YAML). Finally, we created a new, clean syntax to make programming even more accessible for non-developers. I wonder what you think of this approach?

December 23, 2017 at 8:45pm
I've been thinking of a very similar problem and would like to follow this thread and talk more when I have demos of my ideas to play around with.
Where else can I follow up on this discussion?


August 4, 2021 at 12:03am
I love this thread - having the same problem. (see