🎯 What is it?

Your agents can now apply labels to emails and archive messages directly in Gmail. These new capabilities extend the existing Gmail tools, giving your agents more control over inbox management.

💡 Why is it useful?

Many workflows require not just reading emails, but organizing them too. Whether you're triaging customer requests, categorizing vendor communications, or cleaning up your inbox, agents can now take action automatically—no manual sorting required. Combined with Wake-ups (scheduled agent runs), you can fully automate inbox maintenance.

⚙️ How does it work?

Agents with access to the Gmail tools can now use new actions to apply any of your existing Gmail labels and move messages to the archive. These actions work alongside the Gmail capabilities your agents already have.
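Under the hood, Gmail models both operations as label changes: applying a label adds its ID to a message, and archiving simply removes the special "INBOX" label. A minimal sketch of the request body shape used by Gmail's public users.messages.modify endpoint (the helper function and example label ID are illustrative, not Dust's internal code):

```python
def modify_request(add_label_ids=None, remove_label_ids=None):
    """Build the JSON body for Gmail's users.messages.modify endpoint.

    Illustrative helper: applying a label adds its ID, and archiving
    removes the special "INBOX" label.
    """
    return {
        "addLabelIds": add_label_ids or [],
        "removeLabelIds": remove_label_ids or [],
    }

# Apply an existing label (Gmail label IDs look like "Label_123"):
label = modify_request(add_label_ids=["Label_123"])

# Archive: in Gmail, "archive" just means removing the INBOX label.
archive = modify_request(remove_label_ids=["INBOX"])
```

This is why archived messages never disappear: they stay searchable and keep their other labels, they just leave the inbox view.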

Concrete Use Cases

Here's how you could use it:

Automated inbox triage: Set up an agent that runs every morning via Wake-ups, reviews new emails, applies labels like "Urgent", "Follow-up", or "Read Later", and archives low-priority messages.

Customer support categorization: Have an agent automatically label incoming support emails by topic (Billing, Technical, Feature Request) and archive resolved threads, keeping your team's shared inbox organized.

Vendor communication management: Create an agent that identifies invoices, contracts, and purchase orders in your email, applies the appropriate labels, and archives them after filing the information in your system.

📈 Benefits for you

  • Save time: Eliminate manual email sorting and filing

  • Stay organized: Maintain a clean, well-labeled inbox automatically

  • Enable new workflows: Combine with Wake-ups for fully autonomous inbox management

  • Reduce noise: Archive processed emails so you can focus on what matters

🚀 How to access it?

The new label and archive capabilities require additional Gmail permissions. To enable them:

  1. Go to Personal Settings (bottom left of Dust)

  2. Find your Gmail connection and disconnect it

  3. The next time an agent needs Gmail access, you'll be prompted to re-authenticate with the updated permissions

Once re-authenticated, your agents will automatically have access to the new label management and archiving tools.

🎯 What is it?

Agents can now set their own wake-up schedules to resume work at a future time within an ongoing conversation. Think of it as giving your agent the ability to "set a reminder" for itself to check back on something, run a recurring task, or wait for a response before continuing.

💡 Why is it useful?

Sometimes work doesn't happen all at once. You might need to wait for someone to respond, check if a condition has changed, or simply run the same process every day at 9 AM. Until now, you'd have to manually come back and prompt the agent again. With wake-ups, the agent handles the timing for you automatically.

⚙️ How does it work?

Agents have access to a wake-up tool that lets them schedule themselves to continue the conversation at a specific time or on a regular schedule (daily, weekly, etc.). Once scheduled, the agent will automatically "wake up" and continue where it left off.
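Conceptually, a recurring wake-up reduces to computing the next occurrence of a schedule and sleeping until then. A rough sketch of that computation for an "every Monday at 9 AM" schedule (the function is ours for illustration, not Dust's internal wake-up API):

```python
from datetime import datetime, timedelta

def next_weekly_run(now, weekday, hour):
    """Next occurrence of a weekly schedule, e.g. Monday (0) at 09:00.

    Illustrative only: Dust's wake-up tool handles this internally.
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - candidate.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= now:  # today's slot already passed: jump a week
        candidate += timedelta(days=7)
    return candidate

# "Every Monday at 9 AM", asked on a Wednesday afternoon:
now = datetime(2024, 6, 12, 15, 30)
print(next_weekly_run(now, weekday=0, hour=9))  # → 2024-06-17 09:00:00
```

One-shot wake-ups ("check back in 2 hours") are the degenerate case: a single timestamp rather than a repeating rule.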

Concrete Use Cases

Here's how you could use it:

Follow-up automation: Ask an agent to send an email to a colleague and check back in 2 hours to see if they've responded, then proceed with next steps based on their answer.

Recurring updates: Have an agent refresh a data dashboard every Monday at 9 AM, or check project status every Friday afternoon and send you a summary.

Real-time monitoring: Set an agent to check an external system every 30 minutes until a specific condition is met (like a deployment completing or a document being approved).

📈 Benefits for you

No more manual follow-ups or remembering to re-prompt your agents. You can now set up truly autonomous workflows that span hours, days, or weeks, with the agent managing its own schedule and keeping work moving forward without your intervention.

🚀 How to access it?

Wake-ups are now available to all users. Simply ask your agent to "check back in [time]" or "run this every [schedule]" and it will use the wake-up tool automatically. For more details on how to configure wake-up schedules, check out the documentation: https://docs.dust.tt/docs/wake-ups

🎯 What is it?

You can now connect Gong to Dust through our MCP (Model Context Protocol) integration. This allows your agents to access call transcripts and notes directly during conversations and workflows, without leaving Dust.

💡 Why is it useful?

Sales and customer success teams have valuable insights locked in Gong recordings. By connecting Gong as a live tool, your agents can pull context from customer calls on-demand, turning those conversations into actionable intelligence right when you need it—whether you're preparing for a meeting, writing follow-ups, or analyzing trends.

⚙️ How does it work?

Once connected, Gong becomes available as a tool that your agents can use. When an agent needs information from a call, it queries Gong in real-time and retrieves the relevant transcript or notes to inform its response.

Concrete Use Cases

Here's how you could use it:

Pre-meeting preparation: Ask an agent to "Summarize the last 3 calls with Acme Corp" and get instant context before your next meeting.

Customer insight synthesis: Create a workflow that pulls key objections or feature requests from recent calls and compiles them into a weekly report.

Follow-up automation: Have an agent draft personalized follow-up emails that reference specific points discussed in the most recent Gong call.

📈 Benefits for you

  • Instant access: No manual searching through Gong—your agents retrieve exactly what they need

  • Better context: Agents can reference actual customer conversations to provide more relevant responses

  • Time savings: Eliminate copy-pasting between tools and streamline your workflow

🚀 How to access it?

This feature is available to all workspaces. Check out our documentation to set up the integration: https://docs.dust.tt/docs/gong-mcp

Note: This is a live, tool-based integration. If you're looking to synchronize transcripts into Dust for semantic search across all your data, use the Gong connector instead.

🎯 What is it?

You can now type / directly in the conversation input bar to quickly access and add capabilities to your conversations. A searchable dropdown appears instantly, listing all available skills and MCP tools, which you can filter and select using your keyboard or mouse.

💡 Why is it useful?

Previously, accessing capabilities required navigating through the toolbar, which could interrupt your workflow. This new slash command feature streamlines the process, letting you stay focused on the conversation while quickly adding the tools you need—similar to how modern text editors and communication tools work.

⚙️ How does it work?

Simply type / in the input bar, and a dropdown menu appears with all your available capabilities. Continue typing to filter results using fuzzy matching (you don't need to type exact names), then select what you need with your keyboard (arrows, Enter, Tab) or mouse. Press Escape to close the menu.
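"Fuzzy matching" here means you only need to type characters in the right order, not a contiguous substring. A common way to implement that filter is a subsequence check, sketched below (this is the general editor-style technique, not necessarily Dust's exact algorithm; the tool names are made up):

```python
def fuzzy_match(query, candidate):
    """True if the characters of `query` appear, in order, in `candidate`.

    A simple subsequence matcher, the usual basis of editor-style
    fuzzy filtering.
    """
    it = iter(candidate.lower())
    return all(ch in it for ch in query.lower())

tools = ["Calculator", "Web Search", "Deep Research", "Data Processing"]
print([t for t in tools if fuzzy_match("calc", t)])  # → ['Calculator']
print([t for t in tools if fuzzy_match("rsch", t)])  # → ['Deep Research']
```

So typing "/calc" surfaces "Calculator" even before you finish the word, and abbreviations like "rsch" still find the right tool.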

Concrete Use Cases

Here's how you could use it:

  • Quick data analysis: Type / then "calc" to instantly find and add calculator or data processing tools without breaking your thought process

  • Adding specialized skills mid-conversation: Type / then start typing a skill name like "research" to quickly enable research capabilities when you realize you need them

📈 Benefits for you

This feature saves time and keeps you in flow. Instead of moving your cursor to the toolbar, you can access everything through keyboard shortcuts, making conversations with agents faster and more efficient—especially valuable when you're working through multiple tasks quickly.

🚀 How to access it?

The feature is available to everyone right now. Just type / in any conversation input bar to try it out.

🎯 What is it?

OpenAI's latest model, GPT 5.5, is now available on Dust. You can select it when building custom agents or use it directly through the global agent interface.

💡 Why is it useful?

GPT 5.5 represents OpenAI's newest advancement, offering improved performance over the previous GPT 5.4 model. This means better reasoning, more accurate responses, and enhanced capabilities for your agents.

Concrete Use Cases

Here's how you could use it:

  • Enhanced analysis agents: Build agents that handle complex research, data analysis, or strategic planning with improved reasoning capabilities

  • Improved conversational agents: Create customer-facing or internal support agents that provide more nuanced and accurate responses

📈 Benefits for you

Access to cutting-edge AI technology means your agents can deliver higher quality outputs, handle more sophisticated tasks, and provide better assistance to your team.

🚀 How to access it?

  • For custom agents: Open the agent builder and select "GPT 5.5" from the model dropdown menu

  • For quick tasks: Use the global agent which now runs on GPT 5.5 by default

🎯 What is it?

OpenAI has released GPT Image 2, a new image generation model that's now the default for all image creation on Dust. This model brings significant improvements in image quality, detail rendering, and the ability to accurately generate readable text within images.

💡 Why is it useful?

Previous image generation models often struggled with two key challenges: editing existing images effectively and incorporating clear, readable text into generated visuals. GPT Image 2 addresses both of these limitations, opening up new possibilities for creating professional-quality images that include precise text elements—something that was previously difficult or impossible to achieve consistently.

⚙️ How does it work?

GPT Image 2 is automatically used whenever you generate images through Dust agents or workflows. The model excels at understanding complex prompts, maintaining high fidelity to your specifications, and rendering fine details with unprecedented precision.

Concrete Use Cases

Here's how you could use it:

Marketing Materials: Generate branded images with company names, slogans, or product descriptions clearly visible and professionally rendered—perfect for social media posts, presentations, or campaign materials.

Data Visualization Enhancement: Create infographic-style images with charts, labels, and annotations that include actual readable data points and explanations, making complex information more accessible.

Image Editing & Iteration: Take existing images and ask your agent to modify specific elements while preserving the overall composition—useful for refining visual assets or creating variations of existing designs.

📈 Benefits for you

You can now generate more professional, production-ready images directly within your Dust workflows. The ability to include clear text eliminates the need for post-processing in external design tools, saving time and streamlining your creative processes. The improved editing capabilities also mean you can iterate on images more efficiently.

🚀 How to access it?

No action needed—GPT Image 2 is already the default image generation model on Dust. Simply continue using image generation in your agents as you normally would to automatically benefit from these improvements.

🎯 What is it?

Agents can now proactively pause and ask you structured questions mid-conversation when they need clarification. Instead of guessing or making assumptions, they'll present you with single or multi-select options to help guide their next actions.

💡 Why is it useful?

Until now, when an agent faced ambiguity, it would either guess (sometimes incorrectly), take a random path, or get stuck trying to interpret unclear instructions. Now, agents can simply ask you directly—turning uncertainty into a quick, interactive exchange. This prevents wasted time, reduces errors, and makes conversations feel more collaborative and intuitive.

⚙️ How does it work?

When an agent needs input to proceed, it will pause and present you with a clear question and predefined answer options (single-choice or multiple-choice). You select your answer, and the agent continues with exactly the information it needs.

Concrete Use Cases

Here's how you could use it:

Research workflow: You ask an agent to "analyze our competitors." Instead of picking arbitrary companies, it asks: "Which competitors should I focus on?" with options like [Company A, Company B, Company C, All of the above].

Content formatting: You request a report summary. The agent asks: "What format do you prefer?" with options like [Bullet points, Paragraph form, Executive summary].

Data prioritization: You ask for insights from multiple sources. The agent clarifies: "Which data sources should I prioritize?" offering [Internal reports, Public data, Customer feedback, All sources].

📈 Benefits for you

  • More accurate outputs: Agents work with your explicit input instead of assumptions

  • Faster resolution: No back-and-forth to correct misunderstandings

  • Better control: You guide the agent's direction at key decision points

  • Richer interactions: Conversations feel more natural and collaborative

🚀 How to access it?

This feature is enabled by default for all agents on Dust, including Deep Dive, and works seamlessly in Slack conversations as well. No configuration needed—your agents will automatically ask questions when they need clarification.

🎯 What is it?

Workspace admins can now configure Dust conversations to be private by default. When this setting is enabled, conversation URLs are only accessible to participants—anyone else who tries to access the link will see a 404 error, ensuring the conversation's existence isn't even revealed to non-participants.

💡 Why is it useful?

This feature addresses the risk of accidentally sharing sensitive conversations through URLs. While Dust conversations are powerful collaboration tools, sometimes conversation links get shared unintentionally (in Slack, email, or screenshots). With private-by-default URLs, you get an extra layer of protection against accidental leakage while maintaining flexibility when you explicitly need to share.

⚙️ How does it work?

When a workspace admin enables this setting, all new conversations automatically become private—only people directly participating can access them via URL. Participants can still use @mentions to invite others to join conversations, and if needed, anyone in the conversation can flip it back to "accessible to workspace members" from the conversation menu.
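The 404 response is a deliberate security pattern: by answering exactly as if the conversation did not exist, the server avoids even confirming that there is something behind the URL (a 403 would leak that). A minimal sketch of the idea (field names and the helper are illustrative, not Dust's implementation):

```python
def conversation_status(conversation, user):
    """HTTP status for a conversation URL request.

    Returning 404 (not 403) for non-participants makes the response
    indistinguishable from a conversation that doesn't exist, so a
    leaked URL reveals nothing.
    """
    if conversation is None:
        return 404
    if conversation["private"] and user not in conversation["participants"]:
        return 404  # deliberately not 403: don't reveal existence
    return 200

convo = {"private": True, "participants": {"alice", "bob"}}
print(conversation_status(convo, "alice"))  # → 200
print(conversation_status(convo, "eve"))    # → 404
print(conversation_status(None, "eve"))     # → 404
```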

Concrete Use Cases

Here's how you could use it:

HR and sensitive discussions: When discussing performance reviews, salary negotiations, or confidential employee matters, conversations stay strictly between participants even if someone accidentally copies the URL.

Strategic planning: When working on confidential product launches, M&A discussions, or competitive analysis with a small team, you can ensure the conversation doesn't leak to the broader workspace if a link is shared out of context.

📈 Benefits for you

  • Enhanced security: Reduce the risk of sensitive information leakage through shared URLs

  • Peace of mind: Know that conversations stay private unless you explicitly choose to share them

  • Flexible control: Keep the ability to make specific conversations workspace-accessible when collaboration requires it

  • Maintain collaboration: @mentions still work seamlessly, so you can invite people without compromising privacy

🚀 How to access it?

Workspace admins can enable this feature by navigating to Admin → Workspace Settings and toggling the private-by-default conversation URLs setting. Once enabled, all new conversations will be private by default, while participants can override this on a per-conversation basis from the conversation menu.

🎯 What is it?

You can now import skills directly into Dust from a GitHub repository or by uploading a .zip file from your computer. This new capability gives you more flexibility in how you manage and deploy your skills across your workspace.

💡 Why is it useful?

If you're managing multiple skills or working with a team that maintains skills in a version control system, you previously had to manually copy and paste code into Dust. This new import feature allows you to centralize your skills in GitHub (or any other repository) and keep them in sync with your CI/CD pipeline, ensuring your Dust agents always use the latest versions.

⚙️ How does it work?

Simply provide a GitHub repository URL or upload a .zip file containing your skill files. Dust will import the skill structure and make it available in your workspace.
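If your skills live in a repository, producing the upload artifact is just zipping the skill's files, which is easy to automate in a CI step. A sketch using Python's standard library (the file names and layout are assumptions for illustration; check the skill documentation for the structure Dust expects):

```python
import io
import zipfile

def zip_skill(files):
    """Bundle skill files (path -> content) into an in-memory .zip.

    Illustrative: the actual layout Dust expects is documented
    separately; these file names are made up.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)
    return buf.getvalue()

payload = zip_skill({
    "skill.json": '{"name": "market-research"}',
    "prompt.md": "# Instructions...",
})
# `payload` is the .zip bytes, ready to upload or commit as an artifact.
```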

Concrete Use Cases

Here's how you could use it:

Development teams maintaining shared skills: Your engineering team maintains a repository of company-specific skills (e.g., data formatting, internal API integrations). When you update the skill in GitHub, you can quickly re-import it to Dust to keep all agents synchronized.

Distributing skills across workspaces: You've built a powerful skill for market research and want to deploy it across multiple Dust workspaces (different departments or clients). Export it once as a .zip and import it wherever needed.

📈 Benefits for you

  • Version control: Keep your skills in Git alongside your other code, with full history and collaboration features

  • Automation: Integrate skill updates into your existing CI/CD workflows

  • Portability: Easily share and duplicate skills across workspaces without manual copying

  • Consistency: Ensure all team members are using the same version of your custom skills

🚀 How to access it?

This feature is available to everyone. When creating or updating a skill, look for the new import options that allow you to specify a GitHub repository URL or upload a .zip file.

🎯 What is it?

Steering transforms how you interact with Dust agents by allowing you to send messages while the agent is working. You can now see every step of the agent's work in real-time—thinking, tool calls, searches—and redirect it on the fly without canceling or losing progress. Additionally, messages are now scoped to one agent at a time, with the active agent shown in the input bar, so you no longer need to use @mentions.

💡 Why is it useful?

Previously, conversations with agents were strictly turn-based: you'd send a message, wait for the complete response, and only then could you course-correct if needed. This made it difficult to guide the agent early when you saw it heading in the wrong direction, and you had no visibility into what was happening behind the scenes. Steering solves this by letting you shape the output as it's being built, not after it's done.

⚙️ How does it work?

As soon as an agent starts working, you'll see live updates of each step it takes. If you notice it's going off track or you want to add context, simply send a new message—the agent will incorporate your input immediately without restarting from scratch.

Concrete Use Cases

Here's how you could use it:

Research and analysis: You ask an agent to research a topic, but after seeing the first few searches, you realize you need a different angle. Instead of waiting for the full response, you immediately steer it: "Actually, focus on European regulations instead."

Data exploration: An agent starts querying multiple data sources, and you see it's pulling from the wrong database. You send a quick message to redirect it before it completes unnecessary work, saving time and getting accurate results faster.

📈 Benefits for you

  • Save time: Redirect agents early instead of waiting for full responses you'll need to regenerate

  • Better control: Guide the conversation dynamically based on what you see happening

  • Full transparency: Understand exactly what the agent is doing at each moment

  • Smoother workflow: No more @mentions needed when working with a single agent

🚀 How to access it?

Steering is now available to everyone automatically. Start a conversation with any agent and try sending a follow-up message while it's working. For complete details and examples, visit the full documentation.