MIT Researchers Develop New AI Tool to Customize 3D-Printable Models With Ease

This tool allows users to customize 3D models without compromising the functionality of the fabricated objects.

Researchers at the Massachusetts Institute of Technology (MIT) have developed Style2Fab, a generative-AI-driven tool that allows users to customize 3D models with ease.

According to TechXplore, the tool allows designers to add personalized design elements to 3D models using natural language prompts, without compromising the functionality of the fabricated objects.


All About Style2Fab

Faraz Faruqi, a computer science graduate student and the lead author of the paper presenting Style2Fab, highlighted its significance for less experienced designers. He noted that it simplifies the process of stylizing and printing a 3D model, providing a learning opportunity in the process.

Style2Fab employs deep-learning algorithms to automatically separate the model into aesthetic and functional segments. This innovation streamlines the design process, making it more accessible to a broader user base.

Beyond facilitating novice designers, Style2Fab holds potential in the field of medical making. MIT research indicates that considering an assistive device's aesthetic and functional features increases the likelihood a patient will use it.

However, clinicians and patients may lack the expertise to customize 3D-printable models. Style2Fab addresses this gap, allowing users to customize assistive devices to match their preferences without compromising functionality.

The project began with an in-depth study of objects available in digital design repositories. The researchers sought to understand the functionalities within various 3D models. This knowledge was crucial for effectively using AI to segment models into functional and aesthetic components.

The researchers identified two distinct types of functionality in 3D models: external functionality, which involves the parts that interact with the outside world, and internal functionality, which concerns the components that must fit together after the object is fabricated.

Segmentation of Models

Style2Fab uses machine learning to analyze the model's structure and track how frequently its geometry changes, and it uses that information to divide the model into segments.

These segments are then cross-referenced with a dataset of models whose parts are labeled as functional or aesthetic, which lets Style2Fab distinguish the functional components that must be preserved.
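To make that idea concrete, here is a minimal Python sketch under stated assumptions: geometric change is approximated by a single per-vertex curvature value, segments are cut wherever that value jumps, and each segment is labeled by nearest-neighbor comparison against a small labeled dataset. The function names, threshold, and feature choice are illustrative only and are not taken from the Style2Fab paper.

# Hypothetical sketch of the segmentation idea described above: split a mesh
# into segments based on how quickly its geometry changes, then label each
# segment functional or aesthetic by comparing it with a small labeled dataset.
import numpy as np

def segment_by_curvature(vertex_curvatures, threshold=0.5):
    # Group consecutive vertices into segments wherever the local curvature
    # (a stand-in for "frequency of geometric change") jumps sharply.
    segments, current = [], [0]
    for i in range(1, len(vertex_curvatures)):
        if abs(vertex_curvatures[i] - vertex_curvatures[i - 1]) > threshold:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

def label_segment(segment_feature, labeled_dataset):
    # Nearest-neighbor lookup against (feature, label) pairs, where the
    # label is "functional" or "aesthetic".
    feats = np.array([f for f, _ in labeled_dataset])
    labels = [lbl for _, lbl in labeled_dataset]
    nearest = np.argmin(np.linalg.norm(feats - segment_feature, axis=1))
    return labels[nearest]

# Toy example: per-vertex curvature values for a made-up model.
curvatures = np.array([0.1, 0.12, 0.11, 0.9, 0.95, 0.92, 0.15, 0.13])
dataset = [(np.array([0.1]), "functional"), (np.array([0.9]), "aesthetic")]

for seg in segment_by_curvature(curvatures):
    feature = np.array([curvatures[seg].mean()])
    print(seg, label_segment(feature, dataset))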

After segmentation, the user can enter a natural language prompt describing the desired design attributes. An AI system called Text2Mesh then attempts to generate a 3D model that matches the prompt, modifying only the aesthetic segments.
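The stylization step can be sketched in the same hedged spirit: only segments labeled aesthetic are passed through a prompt-driven style function, while functional segments are copied through unchanged. The toy_stylize placeholder below stands in for a system such as Text2Mesh, whose actual interface the article does not describe.

# Hypothetical sketch of the stylization step: apply a prompt-driven style
# function only to segments labeled "aesthetic", leaving functional geometry
# untouched. toy_stylize is a placeholder, not the real Text2Mesh API.
import numpy as np

def stylize_model(segments, labels, vertices, stylize_fn, prompt):
    # Return a copy of the vertices where only aesthetic segments are
    # modified according to the text prompt.
    styled = vertices.copy()
    for seg, label in zip(segments, labels):
        if label == "aesthetic":
            styled[seg] = stylize_fn(vertices[seg], prompt)
    return styled

def toy_stylize(verts, prompt):
    # Placeholder "style": nudge vertices outward by an amount derived from
    # the prompt length, just to mark where a generative model would act.
    strength = len(prompt) / 1000.0
    return verts + strength

vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
segments = [[0], [1, 2]]          # vertex indices per segment
labels = ["functional", "aesthetic"]

print(stylize_model(segments, labels, vertices, toy_stylize,
                    "make it look like carved wood"))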

In a study involving makers of varying experience levels with 3D modeling, Style2Fab demonstrated its usefulness. Novice users found it easy to understand and use for stylizing designs. For experienced users, it streamlined their workflows and offered more fine-grained control over stylizations, according to the team.

Looking ahead, the researchers aim to enhance Style2Fab's capabilities. They seek to provide users with control over physical properties and geometry. That includes considering factors like an object's load-bearing capacity when altering its shape.

They are also exploring the possibility of enabling users to generate custom 3D models from scratch within the system. In fact, the team is collaborating with Google on a follow-up project.
