Lessons learned from working on Carbon for IBM.com

In February 2021, I was fortunate to join the leadership team as the Design Lead of the Carbon for IBM.com design system. While our goals are connected to consistency and brand expression, our success as a team has reached far beyond those categories. We have successfully refined our team’s operation and workflows, delivering seven-fold growth in adoption within ten months. It took me some time to digest this tremendous experience. Here are my reflections on what might have worked and why.

Some context about the Carbon for IBM.com design system

Our design system specifically supports the needs of IBM.com, IBM’s corporate website of over 20 million pages built on two legacy design systems, with hundreds to thousands of globally distributed page owners.

Our team’s objectives and challenges are shared by many design system teams. We want to encourage teams adopting the system to reuse as much as possible, but these teams often face aggressive timelines and are tempted by customization and quick hacks. There is a healthy amount of executive pressure on us to not become a bottleneck to others, and to continue to drive consistency and beautiful brand expression in what hits the glass. 

We had to aim for the success of all parties: the adopters of the system need to ship, the stakeholders need results, and the system team needs a healthy pace to run the distance. A design system is a large investment — a bet on efficiency through reuse.

Creating the system is only half the battle. Sustainable operation, successful adoption, and observable value return are my focus today. 

Lesson 1: a project, a product, a service

Most would acknowledge today that a design system is not a one-time project, but a product or a program that requires ongoing maintenance. Yet these labels don’t capture the importance of design system adoption efforts. For a system to be adopted at scale, the design system team has to operate like a service provider.

Let’s start with self-service, which ideally requires minimal attention from the design system team. The Carbon for IBM.com design system is open source, so we have a public GitHub repository, and we maintain a public website for usage documentation. Our sprint plans and work-in-progress designs are available on ZenHub and Box for all IBMers to view. These decisions provide transparency and enable a large number of adopters to self-serve when they want status updates on their requests. However, those familiar with Service Design would be quick to point out that this is only the surface experience, and a lot of backstage work is needed to achieve it. To arrive at a delightful and seamless service delivery, an end-to-end, surface-to-core perspective has to be applied to the operation. Otherwise, GitHub issues go out of date, Box becomes a labyrinth of nested folders, and the purpose of self-service through transparency is lost.

Example of publicly available project status information and usage documentation

To make sure we stay on top of every touch point with our adopters, we formed a “fire line” support team: a roster of one designer and one developer assigned for the sprint to watch our Slack and GitHub activities. These are our “first responders,” and they can close out a good portion of inquiries by leveraging their own knowledge and connections. When we realized increased adoption was creating more implementation questions, we added one more developer to the fire line to improve response times and make the workload more manageable.

The Carbon for IBM.com leadership team meets twice a week to triage new issues in GitHub. We use a collaboration method internally referred to as the “four-in-the-box” model, where the design lead, development lead, product owner, and project manager get together to align on decisions, next steps, and scheduling, and to move issues down the pipeline. This method helps us closely monitor and respond to touch points with adopters, so that the surface experience is an efficient and reliable one. To keep these meetings efficient, fire-line designers and developers fill in any “blanks” that come up during triage.

Troubleshooting for adopters

Aside from GitHub, we have other channels of connection with adopters, including weekly office hours. Office hours are hosted by the design system team and open to anyone in IBM interested in the design system. IBMers are invited to come with questions, work in progress, blockers, or anything else. In some parts of IBM, there is often only one designer on a team, and they can come to us during this weekly time slot to get design feedback, grow their skills, and join a virtual design community.

We’ve created Away Missions, where a Carbon for IBM.com team member joins an adopter team on a short-term engagement. This has been especially fruitful in getting adopter teams’ developers up to speed on using the system. For high-priority adopters, such as the Adobe Experience Manager team in charge of creating page templates, the leadership team holds bi-weekly or monthly check-ins to see if they are on track with their adoption and to address any blockers.

These engagements benefit our work in many ways. Beyond facilitating product teams’ adoption progress and building good relationships, we often witness serendipitous encounters. A common occurrence in our office hours: team X comes with an ask, and we can show them that team Y already has a similar request in the pipeline, one that often fits their needs.

I observed how these strategies made every team member more aware of our role in the bigger picture: to serve our adopter teams so they can succeed. The design system team connects the dots between product teams, creates alignment, and boosts efficiency while playing a supporting role. Our role in governance comes second. When pushback on a request is needed, for example when the request conflicts with IBM’s design and content standards, it is done through the leadership team, with clear documentation. In the end, the win for us is that the system got adopted, the components got reused, and IBM’s public digital storefront got a little bit tidier.

Lesson 2: prioritization, process, and pace

The more services we offer, the more others depend on us. This can go to a dangerous extreme. As a design system team, we have to take care of ourselves and make sure we don’t drown in meetings. This quickly became my number one concern after I joined the team—after all, we still have a whole library to build and maintain. Our day-to-day lives should not always be at the mercy of adopter teams’ deadlines and their executives.

Prioritization is a must. Understanding how a request relates to the current business focus is essential, and there are many helpful prioritization methods out there. However, the bigger challenge, in our experience, is putting a time frame on this prioritized backlog so it is not all slated for “some time in the indefinite future.” An estimated delivery date is a basic requirement for good service.

We found that leaning into a transparent and robust process helps us create this estimated time frame and helps our adopter teams understand some very important things:

  1. Things take time. We are not miracle workers.
  2. Our quality is high because we never skip important steps such as QA.
  3. They can help too!

With these understandings in place, we can delay or push back on requests we are unable to fulfill without alienating adopters, expand our capacity by encouraging contribution, and really pace ourselves for the long run.

Below is a high-level overview of our workflow. Steps colored magenta are handled primarily by designers, blue steps by developers, and black steps by the leadership team. (You can find detailed step descriptions in the Appendix.)

Depending on the type and scope of work, only some of these steps might be needed:

A bug might take only four steps to close out. A typical feature request engages the whole team, but some design and development work can happen concurrently.

The best part is when external teams contribute! Below is an example from when we partnered with the AEM template team on some of their feature requests. We were able to let go of the wheel and play an advisory role, allowing the whole ship to sail faster:

We encourage design and code contributions

The workflow feels hypothetical until it becomes part of a release schedule. The goal is a transparent and robust process that ensures success for all parties, and a timeline is really helpful in getting there.

Here’s how we arrived at a timeline with some important dates to observe:

We practice the standard two-week sprint. Below is the calendar for our version 1.20 to 1.22 releases. We quickened the release pace by one week so our adopters can get the changes faster. 

Calendar for our v1.20 to v1.22 releases, where release date is in dark blue, and code freeze lasts for a week.

Too often, release cycles are discussed as a developer-only activity. This is false, and unhealthy. The schedule has a significant impact on a design team’s workload and pace, so here are the additional relevant dates once designers’ work is taken into consideration:

  1. Design and spec freeze is three weeks before the next code freeze. It’s pencils-down time. This gives the dev team a reasonable amount of time to implement by code freeze.
  2. Pull request submission has to happen at least three days before code freeze to allow time for the design team to do design QA. Designers can block a merge if the implementation falls below standard from a design perspective. Thanks to BrowserStack Percy, visual review has become very straightforward. Conscientious developers can even run and check Percy results before submitting the PR to reduce design QA cycles.
Dates that require design attention — in addition to code freeze — are the PR submission date and the design and spec freeze date
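
To make these offsets concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of backtracking milestone dates from a planned release date. It assumes, per the calendar above, that code freeze starts a week before release, with design and spec freeze three weeks before code freeze and PR submission three days before it:

```typescript
// Minimal sketch: backtrack milestone dates from a planned release date.
// Offsets follow the cadence described above; names are hypothetical.

const DAY_MS = 24 * 60 * 60 * 1000;

function daysBefore(date: Date, days: number): Date {
  return new Date(date.getTime() - days * DAY_MS);
}

interface ReleaseMilestones {
  release: Date;
  codeFreeze: Date;       // code freeze lasts the week before release
  prSubmission: Date;     // at least 3 days before code freeze, for design QA
  designSpecFreeze: Date; // 3 weeks before code freeze; pencils down
}

function planRelease(release: Date): ReleaseMilestones {
  const codeFreeze = daysBefore(release, 7);
  return {
    release,
    codeFreeze,
    prSubmission: daysBefore(codeFreeze, 3),
    designSpecFreeze: daysBefore(codeFreeze, 21),
  };
}

// Example with a made-up release date: design freeze lands ~4 weeks earlier.
const plan = planRelease(new Date("2022-03-23"));
console.log(plan.designSpecFreeze.toDateString());
```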

Putting the release cycle together with the workflows, it becomes easy to see whether expecting a feature request by a particular date is realistic, or to plan ahead by backtracking the days to give time for every necessary step. Below are a few hypothetical examples; in reality, one can expect a few more gaps between steps, as an individual is often handling more than one request or bug per release.

Hypothetical example timelines.

The team is constantly looking at possible ways to be more efficient. Our testing period, for example, has already become much shorter since last year thanks to automation. For more, check out Automating a design system from our Development Lead and Architect Jeff Chew.

Lesson 3: observable value return 

Pick a key performance indicator (KPI) that resonates with executives, but be mindful of the inevitable limitations of simplifying a complex reality into a single number.

For Carbon for IBM.com, our KPI is pageviews. We track pageviews of pages built with Carbon versus pages built with the legacy design systems to measure adoption. Pageviews are a more accurate measure of adoption than the more straightforward page count: IBM.com has over 20 million pages, but a small portion of them garners over 80% of the total traffic. Pageviews’ direct connection with traffic makes our impact clear and observable.
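
As a quick illustration of why pageview weighting matters, here is a sketch with made-up numbers (not our real data): a system can power a tiny fraction of pages yet account for most of what users actually see.

```typescript
// Sketch: pageview-weighted adoption share vs. naive page count.
// All numbers below are invented for illustration.

interface Page {
  designSystem: "carbon" | "legacy";
  pageviews: number;
}

function adoptionShareByPageviews(pages: Page[]): number {
  const total = pages.reduce((sum, p) => sum + p.pageviews, 0);
  const carbon = pages
    .filter((p) => p.designSystem === "carbon")
    .reduce((sum, p) => sum + p.pageviews, 0);
  return carbon / total;
}

// Two high-traffic Carbon pages vs. many low-traffic legacy pages:
const pages: Page[] = [
  { designSystem: "carbon", pageviews: 500_000 },
  { designSystem: "carbon", pageviews: 300_000 },
  ...Array.from({ length: 98 }, (): Page => ({ designSystem: "legacy", pageviews: 2_000 })),
];

// ~0.80 of pageviews, even though only 2% of pages use Carbon.
console.log(adoptionShareByPageviews(pages));
```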

We started the year with 6.2% of all pageviews attributed to pages built with Carbon. By the end of November that same year, 44.8% of all pageviews on IBM.com were attributed to us. Meanwhile, pageviews attributed to the deprecated design system Northstar steadily declined from 54% to 21%, suggesting that our increase came not just from new pages, but also from page owners successfully migrating their pages over.

Bar chart showing the change in pageviews of Carbon-built pages between February and November

This result carries some caveats. Instrumentation has limitations. It’s easy for the tool to decide whether a page is using Carbon for IBM.com as a dependency, but hard for it to know how much — never mind how well — the page is using the system. It is very likely that a page out there is getting counted while using only one of our components, or worse, while breaking every usage rule. It’s hard to deny the necessity of a KPI, but hopefully we will all become a little wiser if we keep up the healthy habit of scrutinizing every number.
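
To illustrate the “whether vs. how much” gap: a browser-side check can look for the system’s components in the DOM, but counting them says nothing about whether usage rules were followed. Here is a minimal sketch; the "dds-" tag prefix is an assumption for illustration, not a statement about the real instrumentation.

```typescript
// Sketch: detect whether a page uses the design system, and roughly how much.
// Assumes the system's web components share a tag prefix; "dds-" is a
// hypothetical choice here.

const PREFIX = "dds-";

function componentTagsInUse(root: Document): Set<string> {
  const tags = new Set<string>();
  for (const el of root.querySelectorAll("*")) {
    const tag = el.tagName.toLowerCase();
    if (tag.startsWith(PREFIX)) tags.add(tag);
  }
  return tags;
}

const tags = componentTagsInUse(document);
console.log(`Uses the system: ${tags.size > 0}`); // easy: a yes/no signal
console.log(`Distinct components: ${tags.size}`); // harder: "how much"
// "How well" (usage-rule compliance) cannot be read off the DOM this simply.
```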

Closing thoughts

Yesterday, I came across the Sales organization’s upbeat monthly letter, filled with actions and demands: “Make the move! Close the deal!” And I was instantly thankful that, being on a design system team, we don’t have to “win,” generate revenue, or beat out competition — at least not directly. 

Although we had to prove our impact in other ways, we are here to serve. We have influence over only one website! But that thought relaxed my nerves. There’s power in thinking that if we do our work right, we can make everyone a winner.

Meet the team

The above practices, workflow, and metrics were very much in place before I joined the team, and credit goes to the present and past leads who created and matured them over time: Linda Carotenuto, Jeff Chew, Wonil Suh, and Roberta Hahn. It is an incredible experience to work alongside these extremely smart people.

Appendix: Description of steps


1. Triage

Determine whether this is the right work to do and when a reasonable time would be for the team to tackle it. This is done by the PM, squad leads, and fire-line team members. Our fire-line team is made up of representatives from the design and development teams, and its members rotate every sprint.


2. Design

Discovery, research, and rounds of design exploration. This is when designers dive in and do competitive research, talk to stakeholders and adopter teams to better understand end users’ expectations, gather data and metrics, explore solutions, get reviews and critiques, iterate, and finally identify a solution. The whole shebang.


3. Spec

There are many tools out there to help with redlining or handoff to development, but as a step in the process, it still requires attention. This is when designers and developers stare at the same thing together, exchange notes, and patch what was missed. We keep both detailed visual and functional specs. These documents help immensely with triaging bug tickets later.


4. Code

Developer (picks up keyboard, 2 seconds later): ✨Tada! 🎉 Is this what you want?

Designer: Nice, almost there! Just a few—

Developer: Did this take two months to design?

Designer (swallows in dry throat): …(then breaks down into inconsolable sobbing.)


5. Testing

Testing starts after code freeze and can take a while to complete. This is to ensure the code changes behave as expected across browsers and devices. We use a fair amount of automation to speed it up and increase coverage. In addition, we do visual regression testing with every code merge. There is front-loaded work to write these tests, which we then update when necessary. For details on how our awesome robot army helps with end-to-end testing, keeps up with upstream, and manages releases, check out Automating a Design System from our Development Lead and Architect, Jeff Chew.


6. Write documentation

After handing off the work at the end of Step 3 (Spec), designers begin documenting the intent and usage of the component or feature for our adopters. This means updating the design system website. If this is done before the dev team completes the code, the updated documentation can sit as an approved pull request, ready and waiting to merge at the next code release. Developers also have technical documentation to write, which can happen concurrently with design documentation.


7. Design kits

There are usually design assets created already by this point, but these are not easily found or distributed. Especially in the case of a new feature or component, it has to be added to our design kits to be delivered to adopters. Usually this requires rebuilding the asset to ensure the Sketch symbols or Figma components reuse foundational elements.

8. QA

Never skip QA, even for small bug fixes. This is done by both designers and developers. As mentioned above, some of our testing is automated. One example is the visual regression testing done with BrowserStack Percy. Percy creates snapshots of the new build and compares them against snapshots of the old build. Any differences are highlighted and flagged as a failure, and these visual differences need to be approved by a designer.
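
For a sense of what this looks like in practice, here is a minimal visual snapshot sketch using Percy’s Puppeteer SDK. This is a generic example, not our actual test suite; it runs under `npx percy exec -- ...` with a `PERCY_TOKEN` set so snapshots upload for baseline comparison.

```typescript
// Sketch: a visual regression snapshot with Percy + Puppeteer.
// Generic example; the demo URL below is hypothetical.

import puppeteer from "puppeteer";
import percySnapshot from "@percy/puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Point at a built component demo page (hypothetical URL).
  await page.goto("http://localhost:8080/components/card");

  // Percy captures the DOM and renders it at the given widths; any pixel
  // difference from the approved baseline is flagged for designer review.
  await percySnapshot(page, "Card - default", { widths: [375, 1280] });

  await browser.close();
})();
```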

We have a process for distributing PRs to team members for review, along with a set of standardized review instructions. Realizing that reviewing PRs is actually a lot of work, we also started tracking time spent on PR reviews by creating a ticket every sprint with flexible story points.

9. Tooling 

We are working on a governance tool, called Beacon, designed to evaluate pages for compliance with the design system. It can be used by product managers and stakeholders, or by adopter teams to self-evaluate their adoption maturity. Again, with an evolving library, the evaluation criteria in Beacon need to be updated frequently. There is design input, such as deciding which violations are severe enough to warrant a fail. At the moment this is primarily a developer task.
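
Purely as an illustration of the shape such a tool can take (this is not Beacon’s implementation; every name and rule below is hypothetical), compliance checks might pair a predicate with a designer-assigned severity, where any severe violation fails the page:

```typescript
// Hypothetical sketch of severity-weighted compliance checks; not Beacon's
// actual code. Designers decide which violations warrant a fail.

type Severity = "fail" | "warn";

interface Check {
  id: string;
  severity: Severity;
  passes: (doc: Document) => boolean;
}

const checks: Check[] = [
  {
    id: "no-legacy-grid", // hypothetical rule
    severity: "fail",
    passes: (doc) => doc.querySelectorAll("[data-legacy-grid]").length === 0,
  },
  {
    id: "single-masthead", // hypothetical rule and tag name
    severity: "warn",
    passes: (doc) => doc.querySelectorAll("dds-masthead").length <= 1,
  },
];

function evaluate(doc: Document): { id: string; severity: Severity }[] {
  return checks
    .filter((c) => !c.passes(doc))
    .map(({ id, severity }) => ({ id, severity }));
}

const violations = evaluate(document);
const failed = violations.some((v) => v.severity === "fail");
console.log(failed ? "FAIL" : "PASS", violations);
```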

10. Release code

At last, the work that passes QA and regression testing is ready for release. At the moment our releases are scheduled every three weeks, though this could change depending on adopter teams’ needs and the state of the library’s growth. We have pretty much fully automated this step, and it is managed by the dev team.
