Labs experiment launch: stackoverflow.ai

UPDATE - July 9, 2025

This week we’ve updated stackoverflow.ai with the following new functionality and fixes:

  • Related content suggestions are more relevant and dynamically re-ranked during conversational chat
  • Related content suggestions can now come from the Stack Exchange network, in addition to Stack Overflow
  • UX updates, improvements, and fixes
  • All users now see the link in the left navigation menu, which is now labeled “AI Assist”

Still pending release later this week:

  • An “Import chat history” option that allows content from other LLM conversations to be imported via a summary
  • Improved mobile accessibility for the chat experience
  • Built-in feedback mechanism

Many thanks to those who’ve provided feedback and shared thoughts on this post. Once the experiment’s built-in feedback mechanism is released later this week, you can use it to provide feedback about your experience if you are engaging with the experiment. General feedback remains welcome here.

Some of you asked very valid questions about who this experiment is targeting and whether this concept can become a new on-ramp into the community. Exploring that potential audience and entry path is exactly why we are running this experiment.

This experiment exists in the broader evolving landscape around LLM attribution. We acknowledge the concerns around attribution of the chat responses, and we are working to address this in the best way for the community and users.

[Mockup image of stackoverflow.ai]


Continuing our experimentation around reaching and supporting technologists and enabling smarter discovery, today (June 25, 2025) we’re announcing a limited experiment: stackoverflow.ai, a new AI-powered search and discovery tool.

What is stackoverflow.ai?

We’ve experimented with AI-powered search and discovery before, so what’s different this time? Past concepts were RAG-based (Retrieval-Augmented Generation) and simply surfaced answers from Stack Overflow. The stackoverflow.ai experiment offers a model-agnostic generative AI tool, trained on knowledge from the broader web (including the Stack Exchange Network). As the user interacts with the tool, related content from Stack Overflow is displayed in the sidebar. This human-authored content from Stack Overflow is available as an entry point into the community and can help the user validate output from the genAI conversation.
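
To make the contrast concrete, here is a minimal, purely illustrative sketch of that pattern, not the actual stackoverflow.ai implementation: a model-agnostic chat call (the ask_llm function below is a hypothetical placeholder) paired with a separate lookup of human-authored related content, here via the public Stack Exchange API’s /search/advanced endpoint.

```python
# Illustrative sketch only -- NOT the stackoverflow.ai implementation.
# It pairs a generic, model-agnostic chat call with a lookup of related,
# human-authored Stack Overflow questions via the public Stack Exchange API.
import requests


def related_questions(query: str, site: str = "stackoverflow", limit: int = 5):
    """Return (title, link) pairs for questions relevant to the user's prompt."""
    resp = requests.get(
        "https://api.stackexchange.com/2.3/search/advanced",
        params={
            "q": query,
            "site": site,
            "order": "desc",
            "sort": "relevance",
            "pagesize": limit,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Titles come back HTML-escaped; left as-is for brevity.
    return [(item["title"], item["link"]) for item in resp.json().get("items", [])]


def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for whichever model a model-agnostic tool routes to.
    raise NotImplementedError


if __name__ == "__main__":
    prompt = "How do I deep copy a list in Python?"
    # answer = ask_llm(prompt)  # the generative response (placeholder here)
    for title, link in related_questions(prompt):  # sidebar-style suggestions
        print(f"{title}\n  {link}")
```

The point of the sketch is only that the generative answer and the links to human-authored content come from separate paths, so the Stack Overflow results can be used to validate the chat output; in the actual tool, those related results are also re-ranked as the conversation continues.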

The goal is to provide users with:

  • A new way to get started on Stack Overflow. The tool can help developers get unblocked instantly with answers to their technical problems, while helping them learn along the way and providing a path into the community.
  • A familiar, natural language experience that anyone who has interacted with genAI chatbots would expect, but further enriched with clear connections to trusted and verified Stack Overflow knowledge.
  • A user-friendly interface with conversational search and discovery.
  • A path, when the genAI tool isn’t providing the solution they need, to bring their question to the Stack Overflow community via the latest question asking experience, including Staging Ground.

This limited release is a first iteration to understand infrastructure viability, identify and fix bugs, assess core functionality, and gather initial feedback before considering opening it up to more testing and adding more functionality.

Additionally, this limited release will help us ensure our tools effectively detect and manage unrelated, inappropriate, or harmful content. It is important that we get this right, so if you get responses from this feature that are incorrect, harmful, or otherwise inappropriate, please contact our team by using the "Contact" link here, selecting Trust and Safety under “What can we help you with?”, and then selecting “I have a concern with StackOverflow.ai”.

[Screenshot: the Contact Support dropdown menu]

What comes next?

Over the next few weeks, we will assess core functionality and gather initial feedback from the community and the randomly selected users and visitors using the feature.

Provided this early testing phase goes well, in July, we expect to add the following features and capabilities:

  • Import chat history - Developers can pick up right where they left off in another AI tool to get unstuck on stackoverflow.ai.
  • Related content suggestions from the Stack Exchange network, as well as Stack Overflow.
  • Dynamic re-ranking of the related content based on the ongoing genAI conversation.
  • A path to post a question directly to the relevant Stack Exchange site.
  • Additional ways to provide feedback and flag content on the genAI response.

This post is for bug reports and suggestions from users who have tried out the new interface, as well as for general feedback from the Meta community — how might this evolve to do more for developers, or for you?

Comments:

  • (+1) I commented on our plans on another post about search re: irrelevant results. Commented Jun 25 at 20:15
  • (+59) @AshZade My concern is that you aren't actually providing proper attribution. I don't care about the relevance of your results because I'm not a user of this tool. I've rolled my own at home. My content, however, was ingested into the model, and y'all promised me attribution. And yeah, I realize I'm being a jerk about it, but the company made a BFD about attribution when a lot of us questioned whether it was feasible. Commented Jun 25 at 20:17
  • (+8) @ColleenV The chatbot seems to be just a wrapper around ChatGPT, so proper attribution is kinda impossible :( Commented Jun 25 at 22:05
  • (+35) @M-- I know, so I would either like to see the company fix their public statements or at least pretend they're trying to live up to them by using a reasoning model or developing something that sort of looks like attribution. Right now, they aren't even trying to provide attribution, and they didn't say "we'll do our best, but it's a hard problem." They said it was non-negotiable if the use of AI was to be ethical. Commented Jun 25 at 22:54
  • @ColleenV This does make sense. I agree that they shouldn't just sweep it under the rug. Commented Jun 25 at 22:55
  • (+14) I'm beginning to think "attribution" was just a way to say "you must link back to SE if you want to use the data" to Google, and a way to muddy the legal waters about the licensing just in case the wrong someone got pissed off enough to go to court. Commented Jun 25 at 23:03
  • (+4) Seeing what has been produced after this statement, it's pretty clear they meant we are not supposed to negotiate about attribution. We won't get it anyway. Commented Jun 26 at 8:48
  • (+1) I asked it to explain programming terms and give attribution to the training material used in the explanation, but it stubbornly refused and claimed it was impossible. Commented Jun 26 at 11:34
  • (+7) @Lundin I'm not prepared to say it's completely impossible, but I do think someone has to do the work to make it possible. Probably something has to be done during the training of a model to make it attribution-capable. I feel very strongly that doing a search of the internet on keywords to try to return some relevant citations is not what "attribution" means. Commented Jun 26 at 15:30