Recently, Figma made headlines when it decided to pull its “Make Designs” generative AI tool from its platform after a user pointed out that the tool was generating designs that closely resembled Apple’s weather app. This raised questions about the company’s training methods, design systems, and quality control processes.
The issue came to light when users noticed that designs generated by the tool bore a striking resemblance to Apple’s weather app, raising concerns about potential legal repercussions for anyone who unknowingly used those designs in their own projects. Figma’s CEO, Dylan Field, was quick to address the issue, clarifying that the tool had not been trained on Figma’s content or on existing app designs.
In response to the allegations, Figma released a statement on its company blog acknowledging that there were issues with the design systems underlying Make Designs. Figma’s VP of product design, Noah Levin, admitted that new components and example screens had been added to the tool without proper vetting, which led it to generate assets that closely resembled real-world applications.
Once Figma identified the problem with the design systems, it promptly removed the assets responsible for the similarities and disabled the feature. The company is now working on an improved quality assurance process before re-enabling Make Designs, though it has not given a timeline for when the feature will return.
Figma’s blog post also shed some light on the design systems that power the AI tool. The company commissioned two extensive design systems, one for mobile and one for desktop, with hundreds of components to guide the output of the tool. By feeding metadata from these components into the model’s context window along with the user’s prompt, the AI generates parameterized designs inspired by the examples provided.
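To make that mechanism a little more concrete, here is a minimal sketch of what “feeding component metadata into the model’s context window along with the user’s prompt” could look like in practice. This is not Figma’s actual implementation or API: the component names, their fields, and the chat-message structure are all hypothetical illustrations of the general pattern described in the blog post.

```python
import json

# Hypothetical metadata describing a few design-system components.
# Figma's real schemas are not public; these fields are illustrative only.
DESIGN_SYSTEM_COMPONENTS = [
    {"name": "NavigationBar", "platform": "mobile", "props": ["title", "leadingIcon", "trailingIcon"]},
    {"name": "WeatherCard", "platform": "mobile", "props": ["location", "temperature", "condition"]},
    {"name": "TabBar", "platform": "mobile", "props": ["items", "activeIndex"]},
]


def build_context(user_prompt: str) -> list:
    """Assemble a chat-style context: component metadata first, then the user's request."""
    system_message = (
        "You generate parameterized UI designs. Compose screens only from the "
        "components described below, filling in their props:\n"
        + json.dumps(DESIGN_SYSTEM_COMPONENTS, indent=2)
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # In a real pipeline, these messages would be sent to a text-generation model,
    # and the model's reply would be mapped back onto actual design components.
    messages = build_context("Design a weather app home screen for mobile")
    print(json.dumps(messages, indent=2))
```

Because the model can only compose from whatever components and example screens it is given, the provenance and variety of those assets directly determine whether its output drifts toward existing apps, which is exactly the failure mode Figma described.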
Despite the setback with Make Designs, Figma continues to explore the potential of AI in design tools. The company announced other AI features at its Config event, such as text generation for designs, which remain available to users. Figma has also outlined its AI training policies, giving users until August 15th to opt in or out of letting the company train on their data for potential future models.
The controversy surrounding Figma’s AI design tool serves as a cautionary tale for companies looking to implement AI in their products. It highlights the importance of thorough testing, vetting, and quality assurance processes to prevent inadvertent issues like the one Figma experienced. As the company works to resolve the problems with Make Designs, it will be interesting to see how it incorporates user feedback and improves its AI tools in the future.