Image processing

Let AI assistants analyze the images you send them in the context of your company's data

Introduction to Vision in Dust

Dust has introduced a new feature called Vision, enabling users to send images to Dust assistants for analysis using company data. This powerful capability opens up new possibilities for visual content analysis and brand compliance checking.

You can find it behind the attachment icon, which previously accepted only text files (PDFs, etc.) and now accepts images as well.

How Vision Works

With Vision, you can now:

  • Send images to your Dust assistants
  • Analyze images in the context of your company's data
  • Get detailed feedback on visual elements

Example Use Case: Brand Compliance Checking

Let's explore a practical example of how Vision can be used for brand compliance checking.

Setting Up

  1. Have your brand guidelines document available in your Dust datasources.
  2. Create a specialized assistant (like "BrandGuard") trained on your brand guidelines.
  3. Configure the assistant to analyze images against these guidelines.
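For illustration, the BrandGuard assistant's instructions might read something like the following (hypothetical wording to adapt to your own guidelines, not a required template):

  "You are BrandGuard, a brand compliance reviewer. When a user uploads an image, compare it against the brand guidelines in your datasources and report any deviations in color, typography, logo usage, or layout, together with concrete recommendations."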

Using the BrandGuard Assistant

  1. Capture a screenshot of the website or visual asset you want to analyze.
  2. Open Dust and select your BrandGuard assistant.
  3. Upload the screenshot to the chat.
  4. Send the message to initiate the analysis.

Analysis Process

The assistant will:

  1. Examine the uploaded image
  2. Compare it to the brand guidelines in its knowledge base
  3. Provide a detailed analysis of compliance, highlighting any discrepancies
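You never have to write code for this: Dust handles the whole exchange inside the conversation. If you are curious what such a vision request looks like at the model level, below is a minimal, illustrative sketch using the openai Python SDK and GPT-4o. The file name, guideline text, and prompt are assumptions for the example, not Dust's actual implementation.

```python
# Illustrative sketch only: Dust orchestrates this for you in the product.
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set.
import base64
from openai import OpenAI

client = OpenAI()

# In Dust, the guidelines come from your datasources; here we inline a stub.
guidelines = "Primary color #1A73E8; headings set in Inter Bold; 16px logo clear space."

# Encode the screenshot as a base64 data URL so it can travel in the request.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model, as noted under Getting Started
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        f"Brand guidelines:\n{guidelines}\n\n"
                        "Compare the attached screenshot to these guidelines and list "
                        "color, typography, and layout discrepancies, with recommendations."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        },
    ],
)

print(response.choices[0].message.content)
```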

Example Output

The assistant might provide feedback on:

  • Color usage and differences from brand guidelines
  • Typography inconsistencies
  • Layout elements that don't adhere to standards
  • Recommendations for improvements

Potential Use Cases

Vision in Dust opens up numerous possibilities:

  • Brand consistency checks across digital assets
  • Product image analysis for e-commerce
  • Visual content moderation
  • Design feedback and iteration

Getting Started

To start using Vision with your Dust assistants:

  1. Ensure the model you use has vision capabilities: only GPT-4o and Claude currently support it.
  2. Prepare relevant visual guidelines or datasets
  3. Create or modify an assistant to handle image analysis tasks
  4. Test with sample images to refine the assistant's performance

Remember, the effectiveness of your Vision-enabled assistant depends on the quality and specificity of the data it's trained on. Don't hesitate to iterate and improve your assistant's knowledge base for better results.

For any issues or questions while building your Vision-enabled assistants, please reach out to the Dust support team.