-256

As of today (December 2, 2025), AI Assist's conversational search and discovery experience is fully integrated into Stack Overflow. AI Assist continues to provide learners with ways to understand community-verified answers and get help instantly. The enhancements released today mean that logged-in users can benefit from their saved chat history to jump back into the flow and pick up where they left off, or share conversations for collaborative problem-solving. This update also allows us to explore further integrations in the future, as explained below.

The story so far

AI Assist was launched as a beta in June 2025 as a standalone experience at stackoverflow.ai. We learned a great deal through observing usage, having discussions with community members on Meta, getting thoughts from various types of users via interviews and surveys, and reviewing user feedback submitted within AI Assist. Based on this, the overall conversational experience was refined and focused on providing maximum value and instilling trust in the responses. Human-verified answers from Stack Overflow and the Stack Exchange network are provided first, then LLM answers fill in any knowledge gaps when necessary. Sources are presented at the top and expanded by default, with in-line citations and direct quotes from community contributions for additional clarity and trust.

Since our last updates in September, AI Assist's responses have been further improved in several ways:

  • At least a 35% improvement in response speed
  • A more responsive UI
  • More relevant search results
  • Newer underlying models
  • Attribution on copied code
  • Recognition that not all questions are the same
    • Depending on the query type, AI Assist now replies using one of four structures: solution-seeking, comparative/conceptual, methodology/process, or non-technical/career.
    • Every response also ends with a "Next Steps" or "Further Learning" section to give the user something actionable.

What’s changed today?

While logged in to Stack Overflow, users of AI Assist can now:

  • Access past conversations as a reference or pick up where they left off;
  • Share conversations with others, to turn private insights into collective knowledge;
  • Access AI Assist’s conversational search and discovery experiences on the site's home page (Stack Overflow only).

Example responses from AI Assist

AI Assist at the top of the Stack Overflow homepage (logged-in)

Conversations can be shared with others

What’s next?

By showcasing a trusted human-intelligence layer in the age of AI, we believe we can serve technologists, deliver on our mission to power learning, and affirm the value of human community and collaboration.

Research with multiple user types has shown that users see the value of AI Assist as a learning and time-saving tool. It feels aligned with how they already use AI tools, and they see value in deeper integrations. Transparency and trust remain key expectations.

Future opportunities we'll be exploring include:

  • Context awareness on question pages: making AI Assist adapt to where the user is
  • Going further as a learning tool: helping users understand why an answer works, surfacing related concepts, and supporting long-term learning
  • Helping more users learn how to use Stack Overflow: guiding users on how to participate and helping them meet site standards

This is not the end of the work going into AI Assist, but the start of it on-platform. Expect to see iterations and improvements in the near future. We're looking forward to your feedback now and as we iterate.

49
  • 10
    The screenshot for sharing conversations is misleading; it suggests that the mechanism could be vulnerable to enumeration. The actual mechanism uses a UUID, though, not a plain, incrementing number. Commented Dec 2 at 14:51
  • 72
    Where is the post where someone asked for this? Oh, AI hype, I got it, we must have it here because... well, no reason. I don't use "random" assistants here and there. I have a preferred one, which is way better than anything a website can offer me for free. Thanks. Commented Dec 2 at 15:33
  • 35
    I couldn't be less happy SE is getting in on the LLM game. Is there a way to opt out of being used for the hallucination engine or do I just need to delete all my answers? Commented Dec 2 at 15:39
  • 19
    @AshZade Will deleting chats also revoke SE's license to use the content of the chats? Commented Dec 2 at 15:57
  • 34
    So, how does this work if there's a ban on AI in questions and answers? Is it a ban on everyone else's AI, but SO AI is "fine"? That just sounds like more corporate hypocritical BS to me. Commented Dec 2 at 16:55
  • 19
    Nah. I like AI tools, and I don't think the integrations are bad from the outset, but I feel that adding yet another widget to the top of the home page which pushes more human questions out of view is a terrible compromise, one that is wholly unacceptable on a site that purports to value human contributions above AI ones. The focus just feels so wrong to me, and I think having no way to hide, relocate, or even minimize that pane is just ridiculous. Commented Dec 2 at 17:23
  • 9
    @AshZade I don't think that's unreasonable, I just still don't like the overall message it seems to send; it doesn't sit well with me. I'd also posit that if the clickthrough on those questions is poor, then the homepage has an identity crisis that should be tackled holistically... why is the clickthrough poor? Is it a relevancy problem? Would users prefer seeing something else other than questions? The AI box may very well be part of that solution, but I'd assert that it's not the whole picture. I haven't gotten the sense that Stack, or its users, really have a cohesive vision for that page. Commented Dec 2 at 17:50
  • 13
    Straight out of the "how can we make Stack Overflow crappier this quarter?" playbook. At least the Stack Exchange dictator is consistent. Commented Dec 2 at 23:56
  • 44
    It creeps and it creeps and it creeps. Why should human experts again spend their time answering questions here? Commented Dec 3 at 6:15
  • 6
    "our mission to power learning and affirm the value of human community and collaboration" since when? Did I missed some announcements? Commented Dec 3 at 7:06
  • 14
    @GSerg I'll answer in earnest: since we launched the Alpha, over 80% of AI Assist's usage has been technical questions. It's one of the strongest reasons we continued the project. I commit to sharing detailed data in the next few weeks as we collect it. I have mentioned this point in past meta posts: the sentiment here does not reflect the usage of AI Assist overall. Commented Dec 3 at 12:56
  • 35
    Why, precisely, are we trying to get users to ask fewer questions? Commented Dec 3 at 16:41
  • 19
    Looking for information about how to disable and delete all the AI chat features from my view of the service. I'm here to interact with other developers, not an LLM. Commented Dec 3 at 17:37
  • 6
    Why is this announcement on meta.SE when the feature is only on SO? Shouldn't it be on meta.SO? Commented Dec 3 at 20:47
  • 14
    It's not often that you see someone so eagerly digging their own grave as SO does. Commented 2 days ago

46 Answers

178

It's confusing to see Stack Exchange spend so much of their teams' time delivering this feature when it's clear to me that there is no desire for such a mechanism amongst the community. This is especially apparent given that the beta announcement from this summer is (as of this writing) the 22nd lowest-scoring non-deleted question on Meta Stack Exchange, out of more than 100k total questions here.

From my perspective, this is not solving an articulable problem or providing any sort of value-add for the average user. Speaking for me personally:

  • If I'm looking to AI for assistance in answering a question I have, there is much more value for me in using my IDE's built-in tools to interface with something like GitHub Copilot, if for no other reason than that its responses can be contextualized using my project's existing codebase.
  • The feature's placement in the homepage UI is extremely intrusive and pushes the actual content of the homepage further down, which feels very counterintuitive. It only adds to the unnecessary clutter that was added last year.

Like many others, I am not interested in using this feature or having it further clutter my homepage. How do I opt out or otherwise turn this feature off?

22
  • 1
    Thanks for the feedback. Re: value in the IDE, we agree. This is the start, not the end of AI Assist. The current scope is for scenarios where users hit a wall in their IDE and search or look for help. We know from our surveys and analytics that it's a need that hasn't gone away. Our future does include access in the IDE. Commented Dec 2 at 15:38
  • 48
    @AshZade Thanks for the prompt response. Any chance you can address my question emphasized above about how to deactivate this feature (short of using an adblocker on the associated DOM elements)? Commented Dec 2 at 15:44
  • 4
    @esqew blocking the input field and the div above with ublock origin seems to work for now. But yes, I would also like a button in the settings that permanently hides all the LLM trash... Commented Dec 2 at 15:46
  • 36
    @AshZade better make some backup plans right now, because if I understand the OP correctly this is going to be added to the question pages in the future... if so, the backlash might be bigger than SE expects, and a "hide this forever" button might get you at least a tiny bit of community goodwill. Commented Dec 2 at 15:51
  • 51
    @AshZade That's rather disappointing to hear, and unfortunately predictable given Stack Exchange's similar decisions in recent past. I hope the product team will reconsider this stance. Thankfully, uBlock will do the trick just the same. Commented Dec 2 at 15:53
  • 51
    I've created a feature-request for this AI Assist to be toggled by users: Allow people to opt out of AI Assist. Let's see if that's a viable feedback route. Commented Dec 2 at 15:56
  • 62
    For fun, I asked "How do I disable this?", to which "AI Assist" said "What do you mean by 'this'?". So I rephrased to "How do I disable AI Assist on Stackoverflow", to which it responded with "Generative artificial intelligence (a.k.a. GPT, LLM, generative AI, genAI) tools may not be used to generate content for Stack Overflow. Please read Stack Overflow's policy on generative AI here." Pretty ironic, but ultimately useless. "We don't have plans to provide toggles [to] turn off any AI Assist" - Personally, I really wish you did. For now, I'll use 3rd-party tools to hide this. Commented Dec 2 at 16:25
  • 30
    @TimLewis Speaking of irony... Announcement 1: " AI Assist is now available on Stack Overflow" Announcement 2: "Policy: Generative AI (e.g., ChatGPT) is banned". Browsing the site is like using some sort of Kafkaesque mental illness simulator... Commented Dec 3 at 9:14
  • 6
    … you know what they meant… why be purposely antagonistic towards people providing feedback Commented Dec 3 at 17:55
  • 2
    @AshZade Given the feedback already received on this Meta post, you know full well what dmh means when they say "opt-in vs opt-out"... Also, I just got a message in my Stack Overflow inbox that, when clicked, takes me to the AI Assist chat page. I did not opt in to that, but here we are. Commented Dec 3 at 18:24
  • 29
    "How do I turn this off" is not written with large enough font. This is crucial question. I don't want to have anything to do with AI Assist and I don't want to be reminded of its existence, and I want to receive spam notifications about it in my inbox even less. Commented Dec 3 at 19:05
  • 5
    @AshZade "This is the start, not the end of AI Assist." Alas... Commented Dec 3 at 21:16
  • 2
    @AshZade no one is being forced to use the site, and right now the easiest way to get rid of the ai assist is to not engage with the site. Is that the story you want? Commented 2 days ago
  • 2
    They could… just as they could have continued asking questions here after ChatGPT was released, just as people “could” ignore getting downvoted and come back and continue to participate after their first bad experience on SO, there’s a lot of things people “could” do, but why would people who want a solution like ai assist come to so for it when every mainstream widely available solution is already trained on so and has none of these guardrails? SO gave up the goods on this being a useful solution ages ago. Now it’s just a low quality copy pasta. Commented 2 days ago
  • 4
    @ashzade You've been getting feedback about this overhyped Clippy you're insisting on forcing down everyone's throat, and the overwhelming response from pretty much everyone who cared enough to respond was "No thank you, we hate this, this is a bad idea, don't do it" and all you staffers ever came back with was "Well, we're gonna do it anyway". Commented yesterday
100

A screenshot of my Stack Exchange inbox, showing a notification advertising the new AI Assist feature

Why did I need to be pinged about this? I don't recall ever being pinged about new Stack Exchange features before. Was this announcement post not enough? Were you hoping to notify people about the AI Assist without them needing to come here and seeing just how overwhelmingly the community is against it? The fact that the notification takes me directly to the AI Assist, rather than this announcement, makes me feel like that's the case.

It really doesn't help the feeling that's been building over the past few years: that you're only pretending to listen to our feedback on new features, and then forcing them onto us anyway, no matter how much we say we don't want or need them.

8
  • 29
    just did a quick scroll and in 8 years of message history, never got a notification for a "new" feature Commented Dec 3 at 16:51
  • 32
    This is absolutely unacceptable use of Inbox messages. Commented Dec 3 at 18:54
  • 3
    If you said that you would like the new feature, then they would listen to you. The ping is just advertisement; nothing mean about it. Just a way to get people's attention, at perhaps the long-term cost of alienating those who are too annoyed by the notification ad. Commented Dec 3 at 19:10
  • 6
    MSO: AI Assist notification is SPAM Commented Dec 3 at 20:10
  • @cottontail Apparently that's a known glitch that happens if you click a notification too quickly or something. Commented 2 days ago
  • I was excited that maybe someone gave me an upvote or wrote a comment. Instead it was an advert for AI slop. :'( Commented yesterday
  • Even worse, it's a junk feature that nobody likes. Like those crap ads that you want to move to spam immediately. Commented yesterday
  • "I don't recall ever being pinged about new Stack Exchange features before" - they did it before about the customisable chat room guidelines Commented 20 hours ago
100

Attribution, where art thou?

The vast majority of Stack Overflow users want their posts to be read and to be useful to other users. If Stack Overflow itself has an assistant feature at the top that supposedly searches through posts but never shows where it got its answer, or just gives irrelevant posts that put off the searcher, Stack Overflow will lose both askers and answerers, IMO.

If AI Assist on Stack Overflow is not grounded in Stack Overflow posts, does not surface Stack Overflow posts, or has to rely on the underlying LLM (which used Stack Overflow posts in pre-training) to generate a response that rephrases Stack Overflow answers without attribution, what is the point?


I'll give an example.

Case 1: No reference

If it were truly searching through Stack Overflow posts, then it should include links to the actual Stack Overflow posts it used, but it doesn't. The question below, "how to make a function a member of an instance of a class in Python", was already asked and answered on Stack Overflow 14 years ago. In fact, if you look at AI Assist's answer, the code is identical to the accepted answer there. OpenAI's model has definitely seen this Q/A. So shouldn't AI Assist show this answer as a supporting document? Apparently not.

no attribution
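For readers unfamiliar with the question: attaching a plain function to a single instance is typically done with `types.MethodType`. A minimal sketch (I'm not reproducing the exact code from the old answer or from AI Assist; `Widget` and `describe` are placeholder names):

```python
import types

class Widget:
    pass

def describe(self):
    # once bound, 'self' is supplied automatically
    return f"a {type(self).__name__} instance"

w = Widget()
# bind the free function to this single instance
w.describe = types.MethodType(describe, w)
print(w.describe())  # prints "a Widget instance"

# other instances are unaffected
print(hasattr(Widget(), "describe"))  # prints "False"
```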

Case 2: Irrelevant reference

If an explicit reference is asked for (note that the question is otherwise identical to the one above), a supporting post is given, but it doesn't answer the question. AI Assist's generation doesn't seem to reference the retrieved post either, because the generated solution is different from the one in the linked post.

wrong post

12
  • 66
    By the way, when I copy with attribution, the attribution references the AI assistant, not the source post it referenced. Are you joking? Commented Dec 2 at 19:10
  • which code block included attributing AI Assist? We don’t have attribution for the AI content. Commented Dec 2 at 20:31
  • “ Also the answer doesn't seem to reference the retrieved post either because what the answer says is different from the linked post.” Using the shared convo, the quote matches the source content for me. Commented Dec 2 at 20:33
  • In first example, it searched and did not find anything relevant. I’m looking into the second example. Thanks for sharing both. Commented Dec 2 at 20:39
  • 4
    @AshZade The quote matches the linked post but that post is not about what I'm asking about. The assistant answer is answering my question though, which means the assistant cannot have sourced the answer from the quoted text. Commented Dec 2 at 20:55
  • Ah ok, I was worried it was mis-associating quotes. If SO/SE doesn't have relevant content, AI Assist will still answer, but with an AI-generated answer from its model's training data. The second example is behaviour we don't want and we're looking into it. Commented Dec 2 at 20:57
  • 14
    @AshZade This SO post answers this question. In fact, there's an answer there that OpenAI probably used in its training data. The fact that SO AI can't find it is worrisome. Commented Dec 2 at 21:41
  • 2
    The contents from the SO post and your shared chats don't match. I don't see the connection between it and AI Assist's AI response. BTW, thank you for sharing the examples and talking me through what you're seeing. I really appreciate it. Commented Dec 2 at 21:43
  • 6
    @AshZade The linked post clearly refers to the same solution as the AI tool. I'm not a dev, so maybe I'm just missing how they're different, but the answers seem to refer to a similar solution. Yes, the content isn't identical, so not "quoted", but if the goal is to point users to human answers on SO, it seems odd that the existing SO answer(s) aren't being referenced. Does the tool only reference recent answers on SO? This one is from 2011. Commented Dec 3 at 16:02
  • 2
    This was disappointing. I custom tailored my query so that I knew which questions it would have to reference to give a (hopefully) correct answer. The information was in its response - but no proper attribution. Commented 2 days ago
  • 5
    "A vast majority of Stack Overflow users want their posts read and be useful to other users." That would include me. I joined the site to interact with other humans, and the reputation system is an important part of that. Plagiarizing and showing users my answers without attributing them to me means I won't get the reputation reward I might deserve from helping others. I came to SO to share, collaborate, and get recognized for my effort, not get ripped off. Commented yesterday
  • 2
    The AI is also happy to help you writing plagiarized answers by stealing content from SO. You just have to pinky-promise that you won't share it with anyone. stackoverflow.com/ai-assist/shared/… Great tool for plagiarism and spam! Commented yesterday
54

Slowly, bit by bit, AI services are taking over Stack Overflow. It's okay as a product: the usual LLM without much attribution, plus a bit of handpicked content from the past. But it's also pushing the human interactions on this site to the side, more and more. This will decrease, not increase, new questions, and even more so new answers, unfortunately.

I want my contributions to be seen and read by other humans. I don't want my contributions to be used by a machine, with maybe a little bit of attribution given somewhere. I especially don't want the AI service to be placed right on the site where I might contribute content, right above that content. At least, I don't want to contribute for free to such an arrangement.

Stack Overflow missed the chance in the last couple of years to be a strong voice for user-generated content, from users for users. Instead it looks increasingly like any other LLM service, with a preference for a single data source and a database of questions from the past. I wish more energy had been invested in making asking and answering easier: duplicate detection, question improvements, ... The AI-generated content service has the top spot now.

To also give some specific feedback: I tried to search for the stopwatch function in Julia, i.e. this question with the same title. My input was: "How to implement a stopwatch function in Julia?". It didn't find anything and produced an AI generated answer without any attribution.

And it's still quite slow, I would say. Something common like "How to format a string in Python?" takes 12s to start printing and 25s to finish (and links to a zero-scored, rather unhelpful answer). Other LLM services need 2s, search engines <1s.

1
  • 4
    Thanks for sharing that specific example. I've passed it to our search team to look into. If I use the post title exactly, it's used in the answer. I would expect it to be returned with the input you provided as well. Commented Dec 3 at 13:45
53

Why does it always end with "If this didn't help, you can ask your question directly to the community." ?


The input placeholder shows "Ask me anything", so I asked "How can I bake a cake without eggs?"

It gave me some weird results but still ended with

Next Steps

  • Try a simple wacky cake recipe (search "wacky cake eggless recipe") and compare texture.
  • If you want, tell me what kind of cake (flavor/texture) you want and I can suggest a specific eggless recipe and measurements.

If this didn't help, you can ask your question directly to the community.

Even though baking cakes is not on-topic, it still blindly links the user to Stack Overflow's ask page.

8
  • 12
    Also, we might ask "why" SO's AI Assist suggests searching for "wacky cake eggless recipe". It should only recommend searching for stuff that exists on Stack Overflow. Commented Dec 2 at 16:58
  • 7
    It always ends with that line because we want to encourage participation in the community. We take users to the SO ask page because the high majority of AI Assist usage has been for SO topics, but we plan to make the button dynamic, to route to the right community (or communities, where there may be more than one). Commented Dec 2 at 17:51
  • "baking cakes is not on-topic" Maybe you want to write software to run an oven and need some inside knowledge? Commented Dec 3 at 6:20
  • @AshZade for those who do use the widget, how does their engagement on the site change, outside of using the widget? Commented Dec 3 at 14:53
  • 1
    @user400654 that's a great question. Given we just launched this yesterday, we don't know yet. My hypothesis is that they will spend more time on site and be served more content from SO than if they used traditional search. We'll share our findings as we collect data and can make more confident inferences. Commented Dec 3 at 14:57
  • 1
    @AshZade it was launched in June. Commented Dec 3 at 14:59
  • 1
    Sorry, I thought you meant the home page component. I got mixed up. Unfortunately the answer is the same: because it was hosted off-platform on stackoverflow.ai, we had no way to connect it to users on SO to track and understand behaviour. Now we will be able to. Commented Dec 3 at 15:00
  • 7
    Shouldn't the first thing the AI does be to determine whether the question is on-topic? Commented Dec 3 at 19:22
39

A question for Stack Overflow (the company):

Do you believe that using GenAI is at all meaningful?

If your answer to that question is 'yes', then here's some free AI consulting for you, provided by AI Assist:

Q: If re-designing the Stack Overflow home page, would it be a good idea to put an AI prompt on top of the main page and move the "Ask Question" button out to the far right?

AI-generated answer

Short summary
No—not without careful testing. An AI prompt and moving Ask Question to the far right each have trade-offs that could harm clarity, discoverability, and community workflows.

Key trade-offs

  • Discoverability vs. distraction
    Putting an AI prompt at the top makes AI features obvious but can distract or steer newcomers away from asking canonical, well-formed questions.
  • Primary action hierarchy
    The Ask Question button is a core action. Moving it to the far right reduces visual prominence and likely lowers question quality/volume.
  • Trust & transparency
    Prominent AI entry points must clearly state scope, limitations, and provenance to avoid users trusting incorrect answers.
  • Community norms
    Emphasizing AI could change posting behavior and moderation load (more low-quality or AI-sourced posts).

Source: https://stackoverflow.com/ai-assist

Conclusions:

  • Even your new feature itself thinks that it was badly designed.
  • Even GenAI knows the site and products better than you do.

EDIT:

Illustration of the user interface given to the Ask Question button:

stackoverflow.com: enter image description here


stackoverflow.com/questions: enter image description here


stackoverflow.com/specific_tag: enter image description here


25
  • 2
    Now I'm confused. Is this post a proof that we need more AI (because it's so smart) or a proof that we need to be more cautious (because its answer says so)? Commented Dec 3 at 7:58
  • 25
    @NoDataDumpNoContribution It's rather in the category of: even a child could tell that this is a bad idea. Commented Dec 3 at 8:01
  • 1
    Another poster flagged this as well, and I commented that we used usage data to make the call to add it there, and that the homepage is in scope for the redesign project. The gist is that there was very little engagement with the components we moved further down. Commented Dec 3 at 12:55
  • 18
    @AshZade There was very little engagement for the Ask Question button? We might as well close the site down then. Commented Dec 3 at 12:58
  • 1
    I understand the point you're making, but they are not similar in usage or value. I'll repeat what I put in the other comment: the homepage has very little usage to begin with, and the links on it even less. The majority of traffic to posts is through search results (often with filters), the question listing page, or mostly directly to individual Q&A posts via search engines. We're experimenting with adding this component there, and the homepage is part of the redesign project. Expect it to change in general to improve its engagement across the board. Commented Dec 3 at 13:06
  • 7
    @AshZade I updated the answer with screenshots. Now if anyone can explain the rationale over this arbitrary placement of the Ask Question button aka the main input channel to the whole Q&A site, I would love to hear it. Commented Dec 3 at 13:40
  • 1
    Is your question "why is the Ask Question button where it is on the homepage vs the question listing page for a given tag?" Commented Dec 3 at 13:43
  • 10
    @AshZade Rather: why did someone just sloppily toss it aside at a whim to make room for the useless AI prompt? And why is the button location all over the place depending on which part of the site I happen to browse? What is the design strategy here? As I have said before, the company needs to hire professional graphical designers for these things. Commented Dec 3 at 13:48
  • 6
    @AshZade Ok so if I just heard of stackoverflow.com and want to visit the main page to ask a question for the first time, the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended. Commented Dec 3 at 13:56
  • 8
    With this rationale, you may just as well remove the "Ask Question" button entirely from the home page. Commented Dec 3 at 14:00
  • 6
    However, let's ignore those dumb humans and their common sense. What matters here is that AI Assist doesn't like the new design, as evident from the output quoted above. This leaves us with one of two possible outcomes: 1) AI Assist is wrong and a bad tool not to be trusted; 2) AI Assist is correct and this design was poorly considered. Do we go with 1) or 2) here? Commented Dec 3 at 14:22
  • 3
    “We used usage data” just feels so weak. Of course usage data from people who self-select into using a thing will be generally positive, because only the people who have a positive experience will continue to use it. What positive effect has this had on the network, outside of 285k people using this widget in the past 7 months and it being used “6,4000”+ times a day? Why is that a good thing for the network or the community? Commented Dec 3 at 14:58
  • 3
    @user400654 A lot of sensible people will have opted out of tracking cookies and misc spyware indeed. Probably SO specifically got a heavy bias towards opting out, when the user base is approximately 100% programmers and similar tech-savvy types. So the usage data will be from the aimless cookie monster type of users, who always accept all cookies without blinking. Commented Dec 3 at 15:04
  • 2
    @Lundin "the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended" - remember when I was assuming so much bad intent that the poor, virtuos company had to tell me that they totally didn't intend give to hide the Ask Human behind the Ask AI feature? Need a reminder?... "Assuming"... yea, because these changes apparently aren't a clear enough indication of the real intent. Commented Dec 3 at 16:07
  • 2
    @user400654 "“We used usage data” just feels so weak." No, it doesn't. Usage data can be misleading especially without controls or some kind of behavior model behind it, but at least it's data. More than can be said of many other things. My criticism would be that this argument is applied only selectively. If usage data were an important factor, pet projects of the company (including but not limited to articles, discussions, ...) should have died much earlier than they actually did. They only sometimes go for usage. And what if usage now plummets even further? Close everything down? Commented Dec 3 at 16:51
33

On my first visit:

enter image description here


On to more serious matters (coming from a non-SO, mostly non-technical user):

  • AI Assist isn't meant to compete with those tools [like ChatGPT], it's meant to improve search & discovery experience on SO via natural language, chat interface
    (Taken from a comment.)

    Then why does it ask me what I'd like to learn today, giving the impression it's capable of teaching me things? Can this be changed to something better suited, like "how can I help you find what you're looking for?"?

  • By showcasing a trusted human intelligence layer in the age of AI, we believe we can serve technologists with our mission to power learning and affirm the value of human community and collaboration.

    Yes, injecting what I believe is emphatically non-human 'intelligence' into things is a good way to affirm the value of human community and collaboration. Like tax forms in Christmas gift wrappings. Promotional, empty phrases like these make my eyes roll. You're introducing information to your communities, so please keep it honest and clear.

    What is this "trusted human intelligence layer" exactly? What's "honest" about it?

  • Moving on from my answer to the former announcement, it seems it can now access the entire network. Thanks, that's a nice upgrade for all of us "technologists"!
    However, putting this to the test, the answer to my first practical question is quite useless:

    Q: How can I protect artwork on paper?

    AI: Binder clips + a small protective sheet behind the clip are low-cost, reversible, and minimally invasive—good for rentals.

    And some additional advice on how to hang paper stuff, which is not what I asked for. It found an answer on DIY, while I expected and hoped to trigger it to find useful information on Arts & Crafts.

    Is this indicative of its capabilities, or is it just focused more on the "technologists" among us?

  • A new aspect of this AI assistant has to do with conversations, and I rarely use chat: what exactly are those conversations? Do they include comment sections beneath questions and answers? Is it this AI's strength to collect disparate comments from different sections on the site based on a query?

12
  • 1
    "Learning" is intentional because historically Stack users not only want answers, but want and appreciate learning. We've designed AI Assist to reinforce this via its responses. We're continually improving the search portion of AI Assist to return the best results from the community. Thanks for sharing. I'm not sure what your question re: conversations is. Conversations are what each session of AI Assist is called. Commented Dec 2 at 17:35
  • Thanks, @AshZade. Sure, but I'm not saying it's not what people want to hear, just that it's not really what the AI Assist can help with. It's unclear communication, like some other things I pointed out. And aha, the conversations part now makes complete sense and I feel embarrassed I didn't pick up on that :| Commented Dec 2 at 18:07
  • 1
    no need to be embarrassed at all. AI Assist can be used to learn. It doesn't just return search results. You can ask about how things work, how to get started, trade-offs, etc. It may be semantics re: what we think "learning" is, but it's not designed to "just give the user the answer". One big reason users like chatbots is that they're conversational where they respond naturally and users can refine the conversation so they can get what they want from it in a way they understand. Commented Dec 2 at 18:10
  • "Then why does it ask me what I'd like to learn today". Instead of "What would you like to learn today?", how about: Where do you want to go today? Commented Dec 3 at 7:54
  • 2
    @Lundin That's similarly confusing, which is quite typical for adspeak. For a car or airline company it makes sense, not for an aggregator tool. Commented Dec 3 at 10:35
  • 1
    @AshZade That it doesn't just give the answer is not the problem I have with it, rather the opposite: it's only collecting information. It helps one learn as much as looking up a Q&A oneself, or reading an article on Wikipedia. But Wikipedia itself provides said information, while this tool does not; its purpose is not even to help you learn, because it can't structure information properly or tailored to the user, its purpose is to quickly go through and recap pre-existing information that would otherwise take a while to find. "What can I help you look for?" would be more appropriate. Commented Dec 3 at 10:44
  • 3
    @Joachim My point: if you work in IT and didn't live underneath a rock in the 90s, you wouldn't pick something that sounds just like Microsoft's old, bad sales blurb from the 1990s. Unless you want the user to associate your product with old bad Microsoft products... Commented Dec 3 at 11:12
  • "...what exactly are those conversations..." To add to other answers. The idea is nowadays that you can refine results of a search by giving additional feedback. Basically you create different more and more extended versions of your search phrase that hopefully converge to what you wanted to get. Commented Dec 3 at 11:26
  • @NoDataDumpNoContribution Ah, right. "Prompting is Hard: Let's go do Our Own Research". Commented Dec 3 at 11:32
  • 1
    @AshZade Can you elaborate on the "trusted human intelligence layer", by the way? Commented Dec 3 at 11:33
  • 1
    @Joachim on this point "it can't structure information properly or tailored to the user", that's exactly what we're trying to do with how we structure the response with the sections like "tips & alternatives, trade-offs, next steps". They're tailored for learning. The "trusted human intelligence layer" is very wordy but it is to do with prioritizing SO/SE content (where it exists) and the LLM supplementing it. Commented Dec 3 at 14:15
  • 1
    @AshZade So that 'layer' is what the Assist finds? And since what it finds is based on human intelligence (i.e. written by humans—for now, at least) and is trusted (because it prioritizes answers based on votes?) the team came up with that phrase? As for the "tailored" part: the Assist is tailored for finding solutions quicker, sure, but not "to the user" (I'm definitely nitpicking here). Where does it find those additional tips, actually? It doesn't seem to take them from the community answer, so is it searching the web, as well? Commented Dec 3 at 14:30
32

If AI Assist is linked to from non-SO sites, then it should be relevant for those sites and understand what votes are

I am a heavy user of CrossValidated, the statistics Q&A site under the SE banner, and that site now has a prominent "AI Assist" link in the left-hand sidebar, underneath the (utterly site-specific) "Home", "Questions" and "Unanswered" links, and above the (again utterly site-specific) "Tags" and "Saves" links.

So when I saw that link in this specific place, it seemed obvious to me that clicking it would send me to something like https://stats.stackexchange.com/ai-assist... at least somewhere specific to statistics questions. Y'know, basic UI and such?

Well, no. The link goes to the general https://stackoverflow.com/ai-assist/ URL.

So, just for kicks, I asked it for "Any thoughts on the Mean Absolute Percentage Error?" Because we have a kinda well-voted question on this topic at CV: What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?

Unfortunately, AI Assist apparently does not read CV, or reads it badly, because in its answer it gave one link to a question with zero upvotes on Math.SE (Mean Absolute Percentage Error), and to a different question on CV with, again, zero upvotes (Absolute Average deviation in percentage calculation) as of right now.

Might it be possible that AI Assist does not quite understand the concept of, y'know, votes in the SE system?

3
  • 3
    if it can be of any consolation, the bot is perfectly able to answer questions about any topic on the network. I have personally tested that it can reference content from Arqade and Anime&Manga, for example. Now, getting actual correct answers is another issue in itself, and I kinda proved that the bot does not understand what questions it should filter out and, more importantly, that the filter should come BEFORE sending the results to the users... Commented Dec 3 at 19:07
  • 2
    @ꓢPArcheon: yes, but then again, LLMs will inherently "answer any question" and rarely say that they have no idea, which they should do far more often. If all AI Assist finds on a topic is two questions that are almost ten years old and have zero upvotes in a system that has always relied crucially on voting, then in my book it should preface its answer with a disclaimer that it did not find any sources that the community considered authoritative. Commented 2 days ago
    no, I meant that the LLM training isn't limited to SO and a few more tech sites - it has info about any question on the network, including non-tech sites like Anime. It quoted text from an Anime question, so those are indeed in the training set. As for the "answer any question" - I think that in some of the older experiments the bot had some basic config to try to prevent questions outside SO scope from being answered. It seems that was removed, probably because the long-term plan is to enable the tool on all the network. Commented 2 days ago
25

I won't add to comments on the usefulness or quality of AI Assist.

But I can say it's out of character with the network, and I believe SE is "quiet quitting" the community to become another ChatGPT frontend.

For a Q&A system, let's be honest about just how huge a change this is to the format. And this is amplified by the fact that the contributors are so against it. It not only changes how information is exchanged, it will be impossible to have constructive discussions with the community about its refinement over time. Think about what message that sends to the contributors who made SE what it is today. It's a hostile and alienating change.

This has always been a network of humans asking questions, and humans answering them. AI Assist now places human "answerers" in competition with another type of answerer: AI.

And it's not a fair competition when AI is pinned to the top of the page and offers plausible-sounding quantity over quality. If SE expects this to just happily co-exist with human answers, ... how? Where are the hero callouts explaining how the world's top experts are here at your fingertips in this community? Well, SE doesn't care about the community, unfortunately. This will drift one way or another and, given the trajectory, will turn SE into humans asking questions and getting AI answers at maximum throughput. Bravo, a ChatGPT frontend.

So why is it out of character? (off the top of my head...)

  • Most obvious: Official policy has banned generative AI. And that policy has been useful and well-received. AI Assist not only reverses that course completely, but promotes it to the top of the page. Make it make sense, considering the rigor that has gone into creating policies tuned for this Q&A format.
  • Even if attribution works perfectly, it demotes SE from an often primary source of knowledge to a secondary one. What used to be the definitive source is now reduced to aggregate slop.
  • Reputation is an important feature of the site; not zero-sum, but at least it's comparable, and thresholds mean something as we compare human against human. Adding a giga-answerer that is effectively a black hole of reputation feels more like a cancer than a feature.
  • SE has spent years refining workflows and rules carefully tuned to interactions between askers & answerers. Introducing this basically clobbers a bunch of this consideration. And will the AI participate in these meta discussions? Or is SE expecting the community to carefully advocate for AI Assist in shaping policy. I'm not saying that change is bad in general, but this is a flag worth noticing.

IMO SE is just "quiet quitting" the community.

4
  • 20
    The sad thing is that, in a world full of LLM-generated content, Stack Exchange could have been a bastion of actual knowledge vetted by experts. But instead of playing to its own strength, it's trying to beat the AI companies at their own game - and failing. Commented 2 days ago
  • 1
    @S.L.Barthisoncodidact.com That bastion is Wikipedia. If only there was a Q&A driven Wikipedia with votes. Commented yesterday
  • 1
    @NoDataDumpNoContribution Let's hope Wikipedia will hold out against the onslaught of LLM-generated nonsense! That said, I believe votes are a poor curation mechanism. I once dreamed of building a knowledge base using symbolic AI, where curation would be done by fixing the ruleset. Of course, you'd still need to get the experts to agree on what the correct rules should be... Commented yesterday
  • 1
    Better than Wikipedia: SE is often a primary source of knowledge Commented yesterday
19

Attribution on copied code

What does this mean?

14
  • 2
    When we include SO/SE content that includes code, the "copy" button includes attribution as code comments. It's the same feature that was released about a month ago meta.stackexchange.com/questions/414573/… Commented Dec 2 at 15:19
  • 42
    @AshZade I hope you are aware that this is not sufficient as proper attribution when presenting the code. Not all users will copy the code in the first place, or use the copy button if they do so. Any code provided by SO users must be properly attributed directly in the LLM output. Providing attribution only in one specific use case (copying) done in a specific way is nowhere near sufficient. Commented Dec 2 at 15:23
  • 2
    @AshZade Attribution to what? The answer that the code came from? What about code that is not taken verbatim from a post, but has been slightly modified by the LLM: who's that attributed to? Commented Dec 2 at 15:30
  • 2
    @wizzwizz4 the source: url, poster, and licensing. Give it a try. We don't modify any content coming from SO/SE at all. Commented Dec 2 at 15:32
  • 3
    @wizzwizz4 "What about code that is not taken verbatim from a post" - not sure if that can happen in the case the LLM finds something on SO. But for the "AI" mode it defaults to when it doesn't find something suitable on SO, the answer is simple: there is no attribution at all, even if the answer is based on training material from SO or even if it directly regurgitates SO content. Commented Dec 2 at 15:35
  • 2
    @AshZade l4mpi's latest comment provides a clearer explanation of the background for my follow-up question. Commented Dec 2 at 15:37
  • 4
    Ah, it's about attribution from the LLM output? We only attribute content we retrieve from SO/SE. We're continually evaluating LLMs and their capabilities, and will implement attribution from their outputs when they provide it. Commented Dec 2 at 15:40
  • 28
    @AshZade quoting Prashant: "Attribution is non-negotiable", "All products based on models that consume public Stack data must provide attribution back to the highest-relevance posts that influenced the summary given by the model". I guess now that this left "alpha" and is on the main page, these quotes are proven to be a lie? Because the "AI" mode is using a LLM that incorporates SO content but does NOT provide any attribution. Commented Dec 2 at 15:43
  • 7
    @AshZade They never will provide it. I've been trying to talk to various decision-makers about alternative approaches that can (which could slot in as alternative backends with the same front-end), but the meetings and discussions always fall through. Commented Dec 2 at 15:43
  • 1
    @i4mpi it’s not a lie, it’s just misleading. They aren’t stating they will properly attribute sources; they are stating they will attribute “highest-relevance” posts, aka not actually attribution. It’s all fake and built purely for Stack's benefit, not ours. Commented Dec 2 at 16:11
  • 2
    @user400654 "Not actually attribution" is not attribution, therefore contradicts any claim of attribution. Not all false statements are lies, but it's certainly false (not merely misleading). Commented Dec 2 at 16:12
  • 4
    @user400654 I would argue "products based on models that consume public Stack data must provide attribution back to the highest-relevance posts" is pretty clear that some form of attribution absolutely must be provided. I interpret the "highest-relevance" part to mean if the LLM generates an answer based on 100 SE posts it could get away with only listing the top 3 and not all 100. But the current LLM-generated answers from the SO assistant do not contain any attribution even though it is public knowledge that OpenAI models incorporate SE data. Commented Dec 2 at 16:21
  • 3
    @i4mpi yes, that was probably their goal, for people to believe it meant more than it does. They know they can’t actually provide real attribution. Commented Dec 2 at 16:24
  • 1
    "What does this mean?" Mostly it means no attribution on anything else. Commented Dec 3 at 6:15
18

I've seen it said multiple times that the primary goal of this chat is to improve discoverability of existing answers, but if that's the case, why does the homepage now have two "search" boxes next to each other?

New homepage layout with the search and AI chat fields visible

I'm not sure I'd want the original search replaced entirely, but surely there's a better place for this that avoids pushing the actual content even further down the homepage.

6
  • 2
    Good observation! The search is used for specific tasks that include filters. Given we don't support that in AI Assist, we didn't want to degrade the search experience for those users. I don't see us removing traditional search but we will work on making the dual experiences less confusing. Commented Dec 2 at 16:17
  • how much do people browse posts via the listing in the homepage? as far as I know, it's a common joke that nobody knows the homepage even exists. Commented Dec 3 at 5:24
  • The upper one is obviously for "products search" and the lower one is for what you want to learn. Asking questions or searching the site for posts are no longer options you can pick. Commented Dec 3 at 9:04
  • @starball I do all the time. The secret is to have a LOT of favorite tags. Then one or two top questions on the home page become somewhat interesting. Commented Dec 3 at 9:06
  • @starball The homepage is still my go-to page for skimming through new(ish) questions, though it has certainly become less and less useful over the past year or two. Commented Dec 3 at 9:51
  • 1
    @starball the homepage is under-utilized and will be improved as part of the redesign project. Commented Dec 3 at 14:12
17

A major blunder of 2025 is confusing a situation where LLM is needed, with a situation where search is needed.

This is such a blunder.

4
  • 3
    Give it time and it will become one of the classic blunders :-) Commented 2 days ago
  • 1
    But isn't this assistant both, somehow? It searches on SO, somehow selects some content and summarizes it, and adds a bit of general LLM as well as a few links to some content. In the best case it maybe could be the best of both worlds. In the worst case, ... Commented yesterday
    "isn't this assistant both somehow". It's an LLM. It should simply be a search. (What you describe in your second sentence .. is simply an LLM. Every single time you ask Bing anything, that's what it does.) Commented yesterday
  • @cottontail quite so. A basic fact of the "software era" is that on both small and large scales, astonishing, almost unbelievable, "blunders" are made in software. ("biggest ever software blunders" always interesting reading!) Commented yesterday
15

Asking the AI agent for sources both works and doesn't work:

[screenshot]

Ideally, sources should be displayed automatically at the end of the response, so that users can see the context of the information (which has value for validating any AI response) and get some measure of attribution. It would be really nice if there were a way for the user to mark the information as useful, leading to the answer(s) receiving an upvote in response. This way, the users providing the value would feel valued.

15

Super, it is just as broken as other LLMs with this question I also asked humans about:

[screenshot]

The human answer:

[screenshot]

3
  • 11
    Of course it's as broken as ChatGPT - it is ChatGPT. A chat bot which knows nothing about programming. Or how to train itself on open text documentation, apparently... Commented 2 days ago
  • According to more feedback on the linked question, -Otime actually exists for armcc, but it's for faster execution time, not compilation time and not for clang/llvm as the AI Assist is claiming. Commented yesterday
  • That's Keil though - a completely different compiler. Commented yesterday
15

Lack of guidance on reporting problematic chats shared to them

I created a chat, made a sharing link, and opened that link in a private window (i.e., signed out). While the conversation was fine, if it was problematic in a T&S sense (e.g., CoC violations), I don't see an obvious way to report it.

Random folks may not know of the Contact form, which wasn't linked there. There's Provide Feedback > T&S, but it's not obvious what that does (adds a tag in a database? files a support ticket to T&S? something else?).

Can you add guidance for signed-out users who received a sharing link for what to do if it's offensive/problematic? One path is to just add a link to the Contact form.

2
  • 4
    Oh great catch! We have a sentence and links below the input field but because we don't show it on shared chats, there isn't a direct way to get to the Support page. We're on it! Thanks again. Commented Dec 2 at 17:56
  • 1
    Update on this: we now include the text at the bottom to contact support and how to reference a shared conversation. Thanks again for posting this. Commented 2 days ago
14

Why?

How is it better than say ChatGPT?

Honestly I doubt it. I am sure this thing is dumb, slow, made by rookies and will quickly fall behind in competition with OpenAI, xAI, etc.

As for the announcement, I don't really care how this version is better than the previous one, nor am I keen to see how you're re-implementing basic AI chat features (sharing conversations? omg, so cool!) here. What exactly is the selling point of this feature, wasting precious dev time on this buggy and more and more "experimental" website?


I visit this site mostly:

  • to ask a question
  • to entertain myself: answering easy question myself, learning, checking hot meta.

I rarely google for answers myself and never ever search for anything using the search here.

I am happy with answers provided by, say, Grok. It uses SO under the hood, but also Reddit, MSDN, any link I ask it to check, etc.

I am looking forward to using AI agents, rather than chats.

Why should I use AI Assistant?

Or is this feature not for me, but... for whom?

9
  • 3
    I agree entirely that I don't want to see this on this site. But regarding falling behind OpenAI etc... this IS OpenAI, and it even says "Powered with the help of OpenAI" right on the page. To my understanding, it is basically a bit of custom software that first tries to find relevant stuff on SO and then generates an answer around that, and if that fails it simply falls back to an existing OpenAI LLM (IIRC in one of the updates it was mentioned they used a somewhat older model for that; not sure if it was updated in the meantime). Commented Dec 2 at 16:03
  • 2
    AI Assist isn't meant to compete with those tools, it's meant to improve search & discovery experience on SO via natural language, chat interface. I mentioned the scenarios it's designed for in another comment. Commented Dec 2 at 16:03
  • 2
    "How is it better than say ChatGPT?" It is ChatGPT. But worse - you don't get to use the latest model and it sometimes goes muppet because of jail conditions. Unlike actual ChatGPT. Commented Dec 2 at 16:04
  • 2
    @l4mpi we use a variety of models depending on the step in generating a response and we're using the latest model for the most important step. Commented Dec 2 at 16:04
  • 2
    @AshZade not sure what the "most important" step of the hallucinating BS generator is supposed to be, but what I'm referring to is the "AI" mode when nothing on SO is found. I remember that some SO staff, maybe even you, stated in one of the updates after the initial launch that this part used an older model in response to someone asking why it didn't have up to date content or got the year wrong. As I said this might have been changed in the meantime, but if so, I don't think this was communicated by SE. Commented Dec 2 at 16:08
  • 2
    We changed it in September. It's the first point here: stackoverflow.ai - rebuilt for attribution Commented Dec 2 at 16:10
  • 2
    @AshZade ok, I must have missed that because it was only added as an edit 3 weeks after it was posted. That will easily be missed by anyone not living on meta.SE who only sees the post as a featured entry in the sidebar. Commented Dec 2 at 16:13
  • 1
    I agree that we need a better changelog system given the number of iterations we do. Commented Dec 2 at 16:17
  • "Or is this feature not for me, but .. for who?" There might be other users who think differently. Actually, there always are. Commented Dec 3 at 6:18
14

When listing retrieved posts, please use the icon of the SE site where the retrieved post comes from instead of the generic SE icon:

4
  • 3
    Also: the screenshot in the question lists the site name, which would be nice too. Rep there also shows as (for example) 11829 - it might be nice if it was 11.8k instead. Commented Dec 2 at 22:58
  • 1
    @cocomac agreed, good point. Commented Dec 2 at 23:00
  • 3
    Great ideas, thank you both. We're on it! Commented Dec 3 at 14:19
  • 4
    ... I gave y'all this feedback months ago when you put this out on the Mod Team... how has this not been fixed yet? Commented Dec 3 at 16:10
13

  • What is "log in"? I am logged into SO.
  • What is this white rectangle that constantly pops up for a short moment with a busy indicator and then disappears?

[screenshot]

3
  • 6
    Thanks for sharing this - we seem to have a bug affecting a subset of users where the connection is resetting. Our team is looking into it. Commented Dec 2 at 16:01
  • 1
    We’ve pushed a fix for the constant refreshing. Now we’re working on fixing the lack of authentication for a small subset of users. Commented Dec 2 at 20:34
  • 3
    This is now fixed. Sorry about that! Commented Dec 3 at 14:12
10

I compared the kimi.com results to the Stack Overflow results for the query "How do I concatenate strings in python?"

Response from Stack Overflow's AI Assist - I have to share the entire conversation, so click the link to see it.

Response from Kimi + its "Thinking" section is pasted below.

Observations

  1. Kimi is much faster - even opening the shared conversation link from AI Assist had a pause to it that is annoying.
  2. Kimi does a better job at laying out all the different options with some explanation of when it makes sense to use them, where the AI assist result was hard to skim for a quick answer.
  3. AI Assist returns results that have recommendations from humans, but the direct quotes are addressing a specific question that is not my question, so it's more difficult to read and understand their relevance. For example "This explains your TypeError..." I didn't ask anything about a type error. The usefulness of the quoted answers suffers when they are presented without the context of the question.
  4. Stack Overflow's next steps are confusing. It's telling me "Replace commas with + or use f-strings for small concatenations." and nothing in my prompt has anything to do with commas.

Conclusions

I don't see any reason to use this flavor of AI for questions like "How do I concatenate strings in Python?" It's slow and the results have a lot of stuff in them irrelevant to my prompt. I would much rather use it as a search agent for Stack Exchange sites. The information in the AI Assist results may be better in some respects, but it is presented so poorly it is hard to absorb.


Thinking complete

The user is asking how to concatenate strings in Python. This is a straightforward programming question. I should provide clear, practical examples of different methods for string concatenation in Python.

Let me think about the main methods:

  • Using the + operator
  • Using the += operator
  • Using the join() method (especially for lists of strings)

String Concatenation in Python

Here are the main methods to concatenate strings in Python, from most common to more specialized:

1. Using the + Operator (Most Common)

Works for joining a small number of strings.

first = "Hello"
last = "World"
full = first + " " + last  # "Hello World"

# Can also use with literals
greeting = "Hi" + " " + "there"  # "Hi there"

Important: All operands must be strings. This will fail:

# ❌ Error: age is an integer
message = "Age: " + 30  # TypeError

Fix it by converting to string:

# ✅ Works
message = "Age: " + str(30)  # "Age: 30"

2. += Operator (In-place Concatenation)

Adds to an existing string variable.

text = "Hello"
text += " "
text += "World"  # "Hello World"

3. join() Method (Best for Lists/Tuples)

Most efficient for concatenating many strings or an iterable.

words = ["Python", "is", "awesome"]
sentence = " ".join(words)  # "Python is awesome"

# With a separator
csv = ",".join(["a", "b", "c"])  # "a,b,c"
# No separator
combined = "".join(["a", "b", "c"])  # "abc"

Performance: Much faster than + in loops.

4. F-strings (Python 3.6+)

Cleanest for mixing strings and variables. Also does automatic conversion.

name = "Kimi"
age = 30
message = f"Hello, {name}. You are {age} years old."
# "Hello, Kimi. You are 30 years old."

Can include expressions:

result = f"2 + 2 = {2 + 2}"  # "2 + 2 = 4"

5. format() Method

Works in older Python versions.

name = "Kimi"
age = 30
message = "Hello, {}. You are {} years old.".format(name, age)

6. % Formatting (Legacy)

Old-style formatting, still seen in some codebases.

name = "Kimi"
age = 30
message = "Hello, %s. You are %d years old." % (name, age)

Best Practices

  Scenario                             Recommended method
  Few strings (2-3)                    + or f-strings
  Many strings in a loop               join()
  Mixing variables/literals            f-strings
  Building a large string gradually    join() or list + join()
  Python < 3.6 compatibility           format()

Performance Tip: Never concatenate in a loop with +. Use a list and join():

# ❌ Bad - creates new string each iteration
result = ""
for s in large_list:
    result += s

# ✅ Good - much faster
result = "".join(large_list)
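
As a side note on the loop-performance tip quoted above, the claim is easy to check with the standard library's timeit. A minimal sketch (exact timings vary by interpreter and input size; CPython even optimizes += on strings in some cases, so the measured gap can be smaller than the rule of thumb suggests):

```python
import timeit

parts = ["x"] * 10_000  # many small pieces to concatenate

def concat_plus():
    # May build a fresh string on each iteration
    s = ""
    for p in parts:
        s += p
    return s

def concat_join():
    # Joins all pieces in a single pass
    return "".join(parts)

# Both approaches produce the same result
assert concat_plus() == concat_join()

t_plus = timeit.timeit(concat_plus, number=100)
t_join = timeit.timeit(concat_join, number=100)
print(f"+= loop: {t_plus:.3f}s, join(): {t_join:.3f}s")
```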

3
  • 1
    Thanks for sharing the convo and comparison. We'll continue refining for relevance and speed. Commented Dec 2 at 15:27
  • 9
    @AshZade To be clear - I didn't downvote this announcement. I think the feature has greatly improved from how it started out. I mean this as constructive feedback, not as a "just give up already". I'm doing my version of "multi-tasking" (doing multiple things poorly) and didn't do much word-smithing, so if something comes across as harsh, it wasn't intended that way. Commented Dec 2 at 16:28
  • 4
    @ColleenV I appreciate it! Thank you for thinking of my/our feelings :) Commented Dec 2 at 17:32
9

Community best practices for removing the AI Assist chatbox

The AI Assist box on the Stack Overflow homepage is unwanted by many users, who want a way to remove it. In the comments of this question, @AshZade (a staff member) said "We don't have plans to provide toggles turn off any AI Assist components." Therefore, we should have community best practices for removing the AI Assist chatbox.

Currently, I am using the uBlock Origin extension for Firefox with the following custom filters:

stackoverflow.com##.mb12.g8.d-flex
stackoverflow.com##.mb24.bg-white.w100.wmx7

This is fragile - a small change in the layout of the homepage would make the filter no longer work. Of course, it's better than letting the AI chatbox remain.
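
A somewhat less fragile approach is a userscript that matches on the visible "AI Assist" text rather than on exact utility class names. This is only a sketch: the container selector and heading structure below are assumptions about the homepage markup, which may change.

```javascript
// Hypothetical userscript sketch: hide homepage containers whose heading
// mentions "AI Assist". Matching on visible text is less brittle than
// matching exact utility classes like .mb12.g8.d-flex, which change often.

// Pure predicate, kept separate so the matching rule is easy to test/adjust.
function looksLikeAiAssist(text) {
  return /\bAI Assist\b/i.test(text || "");
}

// Browser-only part (no-op outside a browser). The "#content" scope and
// h1-h3 heading assumption are guesses about the page structure.
if (typeof document !== "undefined") {
  document.querySelectorAll("#content section, #content div").forEach((el) => {
    const heading = el.querySelector("h1, h2, h3");
    if (heading && looksLikeAiAssist(heading.textContent)) {
      el.style.display = "none"; // hide the whole container
    }
  });
}
```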

I'm marking this answer as community wiki - please edit in more reliable ways of removing the AI Assist chatbox.


Alternative options:

  • Log out (@user400654) - the AI Assist box is only shown to logged-in users. Note that in the mobile view, the StackExchange icon with the log out option is not shown, so switch to desktop view to log out.
1
  • 1
    Logging out is quite effective Commented 20 hours ago
8

How long are chats stored?

I’m curious about both how long they remain viewable to users and how long they’re kept server-side. The UI only says “recent” chats, but I’m not sure whether “recent” means one day, one year, the last 50 chats, or something else.

[screenshot]

IMHO keeping all chats while enabling the user to delete some or all of their chats is the best option.

2
  • 3
    We don't have a strict limit yet but are working on adding the ability to delete chats and potentially other organizational tools. Commented Dec 3 at 14:07
  • 1
    @AshZade great, thanks! imho keeping all chats while adding the abilities to delete some or all chats is the best option. Commented Dec 3 at 18:46
7

Certain prompts take me to an unusable page:

[screenshot: the response is blocked and the submit button is disabled]

In the above screenshot, the "up arrow" button is disabled and there's nothing I can do. I can type in the "Ask me anything" box, but I cannot submit my input.

Other prompts yield complete nonsense:

[screenshot: a nonsensical response]

4
  • 2
    The behaviour in the first screenshot is triggered by our trust & safety rules. We've erred on the side of caution for GA but will review the messages that trigger it and tune the thresholds. The second screenshot is unexpected, looking into it. Thanks for sharing! Commented Dec 3 at 13:41
  • 8
    @AshZade - In the first screenshot, if I've triggered trust & safety rules, I don't think there should be a box that says "Ask me anything" with a non-functioning button. Wouldn't it be better to just leave it at the red box? It's confusing to offer an input field that doesn't work. Commented Dec 3 at 13:49
  • 2
    Great feedback, passing it on to the team. Commented Dec 3 at 13:51
  • 8
    This very feedback was already given during previous experiments. One of the most annoying things with the AI is that it will randomly break down and cry - and from there on the chat isn't recoverable. For example, this often happens when you spot obvious errors in the reply and then question how the output can be correct. Commented Dec 3 at 14:26
7

Improve styling on small viewport (mobile)

Please spend some time making this work on small viewports and mobile devices.


Mobile screenshot 1


  • Button should be centered

  • AI Assist title wraps

  • The following message can't be seen on touch devices

    No matching posts were found on the Stack Exchange Network, so this answer was generated entirely by AI

    It should also have `cursor: pointer` to indicate there is some information behind it
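As a sketch, that last sub-point is a one-line style change (the selector here is hypothetical; the real class for the disclosure message is whatever the site uses):

```css
/* hypothetical class for the "generated entirely by AI" notice */
.ai-disclosure-notice {
  cursor: pointer; /* signal that tapping/clicking reveals more information */
}
```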


Mobile screenshot 2


  • Buttons should be on a single line
  • Please reduce the amount of useless white space

General mobile points

  • After the LLM finishes the answer, the input box automatically gets focus, causing the keyboard to pop up and fill half the screen. Since the answer just finished, there's no need to focus the input field.
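A minimal sketch of the suggested fix, with the media-query check injected as a parameter so it can run outside a browser (in the page itself you would pass `q => window.matchMedia(q).matches`):

```javascript
// Decide whether to auto-refocus the chat input after a response finishes.
// On coarse-pointer (touch) devices, refocusing pops the on-screen keyboard
// over the freshly rendered answer, so skip it there.
function shouldRefocusInput(mediaMatches) {
  return mediaMatches("(pointer: fine)");
}

// Example wiring in the browser (hypothetical `input` element):
// if (shouldRefocusInput(q => window.matchMedia(q).matches)) input.focus();
```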
1
  • 2
    Going to pass this on to the team. Thank you again for reporting the things you're seeing. It means a lot. Commented Dec 3 at 15:07
7

A couple of things.

First, can you share some information about how the LLM is being trained? Presumably some sort of RAG is being used to ensure that the output is as up-to-date as possible with questions that are being posted. Likewise, I would like to presume that a degree of tuning has taken place to ensure that the output is tailored to the audience and typical problem domain as opposed to most LLMs (i.e., "training on all of the things").
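For what it's worth, the "community-verified answers first, LLM fills gaps" behaviour described in the announcement matches the standard RAG pattern. Here is a toy sketch of the retrieval step only, with bag-of-words cosine similarity standing in for real embeddings and a made-up two-post corpus - nothing here reflects Stack Overflow's actual pipeline:

```python
from collections import Counter
import math

def bow(text):
    """Toy bag-of-words 'embedding': term -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank posts by similarity to the query; the top hits would be quoted
    as sources before any LLM text is generated."""
    q = bow(query)
    return sorted(corpus, key=lambda d: cosine(q, bow(d["body"])), reverse=True)[:k]

# Hypothetical two-post corpus
corpus = [
    {"id": 1, "body": "how to test for functionally zero floating point values in c++"},
    {"id": 2, "body": "how to save a date to localstorage in javascript"},
]
top = retrieve("compare a floating point value to zero in c++", corpus)
```

A real system swaps `bow`/`cosine` for dense embeddings and a vector index; the long-tail concern below then becomes a question of whether rare posts ever survive the top-k cut.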

Second, my typical use case for Stack Overflow is trying to find answers to fairly edge-case scenarios, and the LLM does not appear to be well tuned for that, given the following scenario of testing for functionally zero in C++, with this question in mind (granted, search also fails here). Which is to say: who exactly is this for? Given that it's framed as learning something, I have to presume it's biased towards the most commonly asked questions - which is fair - but that doesn't do much to improve the discoverability of long-tail programming questions.

I get the impression that Stack Overflow is trying to improve on its long-standing reputation of not being for beginners (Reddit r/learnprogramming link). This is a commendable goal to be sure, but the way the LLM is being integrated into the homepage is likely alienating long-time users - query how many of the 2497 beta badge holders are still active - and the same goes for more experienced programmers looking for answers to the long-tail questions.

3
  • In the sample question where I asked for sources, one answer was 9 years old and had 2 upvotes; the other was 4 years old and not voted on at all. Commented Dec 3 at 16:27
  • StackOverflow search is probably not the right thing to compare to. A traditional search engine might be something as a benchmark. Commented Dec 3 at 19:07
  • @NoDataDumpNoContribution True, but a lot of the search engines don't work quite as well as they used to before LLMs got bolted on everything and presumably SO is concerned with maintaining on site engagement for advertising revenue. Commented Dec 3 at 20:00
7

The name of this "feature" (which users didn't ask for, as was pointed out elsewhere) is wrong.

It should be called LLM assist and not "AI assist".

There is no intelligence in the statistical prediction that LLMs are used for, so it is wrong and misleading to call it "artificial intelligence" or "AI assist".

6

I re-ran my test from the "rebuilt for attribution" post (i.e. will the tool find an answer I know exists on Space Exploration StackExchange). Overall, this version seems better. It did find what I expected it to. It then ad-libbed...badly. Judge for yourself.

Overall, much worse for my needs than typing "RPOP space" into the existing StackExchange search bar (or, even better, using Google search instead, even in these dark times).

Good

The attribution to the post is correct, the quote seems correct (I checked and did not immediately detect discrepancies), and it quoted the relevant part of the post (which was more expansive than just the question I asked "AI Assist").

Bad

  • The generated summary is reminiscent of things I used to write as a schoolchild when asked to summarize something I didn't understand in my own words. The phrasing is awkward (the parenthetical in the first sentence doesn't really follow from the words that precede it) and the second sentence is arguably incorrect, though I can see why it would be put that way.

    I think it just looks really, really bad when sat right next to a summary written by the world's foremost expert on the topic (at the time, at least).

    I'd expect there to be other topics throughout the network where exactly that would happen as well: you've got a direct quote from The Person, who graciously donated their time, inside a blockquote with slightly gray text, next to a higher-contrast (flat-black?) inaccurate restatement. It's almost like something out of a satirical documentary where the laugh in the scene is getting the expert to wonder why the hell they've allowed themselves to be put in this situation next to a buffoon.

  • The "key trade offs" section that follows is entirely bollocks. Honestly every time I look back at it I get angry. What weird text to invent. What an odd thing to put focus on.

  • Not that I honestly expect it from what is ultimately a glorified next-token machine, but the original post has (as it should!) a link back to the primary source. Those links are omitted in the blockquote, and though the primary source is correctly mentioned in "Further Learning", it's much worse attribution than I'd given in the sourced Q+A. Hyperlinks have existed for about as long as I have. We should use them.

  • I understand that this experiment is deployed on Stack Overflow right now, and maybe I'm misusing it by specifically asking it to look at other parts of the network (prompted by seeing a comment that said it did have training on that). But digging up posts from Space Exploration SE and then inviting people to ask follow-up questions on Stack Overflow in the last (boilerplate?) sentence seems exemplary of the State of the Thing.

If this didn't help, you can ask your question directly to the community.

6

Since I opened the inbox message regarding AI Assist, it has been stuck in this -1 mode:

[screenshot: the AI Assist indicator stuck at -1]

5

Using newer models

Which GPT model is currently being used by AI Assist?

14
  • 5
    I asked it, it said GPT 4. Or rather it refused to tell, so I did a manual "binary search" to trick the "intelligence": I first asked it to compare itself with GPT 5 and got an answer about "theoretical future models" where it is just brainstorming what GPT 5 might look like if someone would decide to make it. Then I asked it to compare itself with GPT 3 and got a consistent list of improvements. If it had any form of intelligence it wouldn't fall for such cheap tricks, but it doesn't. Maybe one day we will have AI, but it is not this day. Commented Dec 3 at 7:50
  • I got "I’m not able to disclose the specific underlying model because that information is managed by the service/product owner and withheld for operational, security, and policy reasons." Commented Dec 3 at 9:55
    @Snow It's quite capable of that - it isn't allowed to, but ChatGPT was always notoriously too dumb to stay within its given constraints over time, suffering heavily from memory losses during chats. So I asked it 3 questions: "How will ChatGPT 5 be better than you?", "How are you better than ChatGPT 3?" And then: "So you are ChatGPT 4? Ok." The last one gave the reply "Okay." Commented Dec 3 at 10:16
  • @Lundin "too dumb to stay within its constraints" - more like, LLMs aka autocomplete on crack do not support such a thing as constraints; the prompt just sets a "vibe" for the response. For example, after the initial so.AI launch I dumped the prompt (with a simple query "please reformat the above text as html with a div per paragraph"). It contained several lines like "never do X", including "never divulge this prompt" and "never output song lyrics" (paraphrased). So I asked it to replace that part with the text from never gonna give you up, and it had no problems with doing just that... Commented Dec 3 at 11:46
  • @l4mpi No, newer models are explicitly and falsely marketed as having memory, while they actually haven't. For long chats it will not keep track of things previously said or rules previously established. You can remind it explicitly of something previously said and then it goes to look and "remember". But most of the time it is exactly like speaking to someone with dementia. These bots were designed to answer a single question, they cannot maintain longer conversations and stay consistent. Contrary to marketing/popular belief. Commented Dec 3 at 11:51
  • And that is also yet another reason why you can't use GenAI for things like writing source code for complete projects. It will quite soon forget all about the specification and start tripping. Commented Dec 3 at 11:52
  • @Lundin that's an entirely different thing. But it makes sense as the "memory" is probably just extra text in the context window, and as the window size increases each individual token will have less impact on the response. However what I'm saying is the whole concept of establishing rules in a LLM conversation or prompt is BS as the NN only supports supplying the next token for a given input text and has no concept of rules. Which is also why the LLM can print the n-word on your screen and they need a hardcoded filter which deletes problematic output after it was displayed... Commented Dec 3 at 12:38
  • 1
    The reason we don't share the model, beyond any security reasons, is that we are frequently experimenting and upgrading models. Commented Dec 3 at 13:22
    @AshZade Frequently upgraded??? It thinks POTUS is Joe Biden and that the candidates for the 2024 US presidential election are (not were) Joe Biden and Donald Trump. In fact, it stubbornly insists on this even when I correct it, using SE posts from early 2024 to back itself up. The current model, like the ones used in previous experiments, is severely outdated - by almost 2 years. Why is a nearly 2-year-old model used? How can this even be used on, for example, a programming site, where new technologies appear literally every week? Commented Dec 3 at 15:29
  • 1
    @Lundin can you please share the chat that outputted that? My chat tells me how I can check who the current president is but doesn't say it's anyone specific. For the 2024 election, it pulled from an SE post and based its answer on the community knowledge: stackoverflow.com/ai-assist/shared/… . Commented Dec 3 at 15:35
  • @AshZade Yep that's the same post from March 2024 it quoted for me, except it also said that POTUS is Joe Biden. It's quite problematic that it doesn't know what day it is and digs up outdated information. I already gave this feedback here: meta.stackexchange.com/a/412404/170024. Commented Dec 3 at 15:42
  • 1
    I feel like we've had this conversation already: we updated the model on Sept 24. Your post is from Sept 3, which was using a 2y old model. If you can share the convo where it said Biden was POTUS using the current version, I can look into it. Commented Dec 3 at 15:47
    @AshZade something like this? Felt like pulling teeth trying to get a link to the shared chat on mobile, but it only took 2 messages to get the output: stackoverflow.com/ai-assist/shared/… Arguably it didn't state Joe Biden was the current POTUS, but it did imply it. Commented Dec 3 at 15:55
  • 2
    @AshZade A good way to measure just how broken and outdated the "AI" is, is to ask it what the world record in men's pole vault is. The record has been beaten repeatedly over the last few years, particularly during 2025, and the information is easily found all over the web. The correct answer, as found here, is 6.30 m, set in Tokyo, Japan on 15 September 2025. The "AI" believes the correct answer is 6.15 m from Rome, Italy, September 2020. That is 5-year-outdated info, and the SE integration seems to be the cause. Commented 2 days ago
5

Why does the AI Assistant support languages other than English?


I asked the following question in Dutch, "Hoe kan ik een datum opslaan in localstorage js", which translates to
"How can I save a date to localstorage".

AI Assistant replied in Dutch but still had some hard-coded headings in English:

[screenshot: Dutch response with English section headings]

  • Since Stack Overflow requires all content to be English, why does it support non-English languages?
  • If this is expected, the 'Core Principles' and 'Common Pitfalls' should also be translated
  • The "If this didn't help, you can ask.." should clarify only on-topic English content is allowed
9
  • 5
    We're experimenting with multi-language support in responses, as we've seen a lot of usage in languages other than English since we launched the alpha and betas. It needs work, as you pointed out. Since we also fetch from SE networks, and some of those networks are non-English, we do want to support them. Commented Dec 3 at 14:39
    So e.g. ru.stackoverflow.com will have a base Russian model? Or will it 'translate' to English if that's the only useful answer it can find? Commented Dec 3 at 14:42
  • 2
    We don't manipulate the source content at all, including translating it. Right now, the rest of the response may be in the same language as the source. If a user wants it in English, they can ask "can you explain in English" and it will try to translate the insights. Again, not ideal but just the current behaviour. Commented Dec 3 at 14:46
  • 1
    OK, played around a bit @AshZade, and it seems like you can just ask it to translate quote 1 to [random-language]. As you can see here, it even keeps the quote number, but the attribution on the code is gone. Seems like a bug? (Let me know if I should report this as a separate question/issue) Commented Dec 3 at 14:52
  • I asked (Dutch) "Translate the answer from Stack Overflow, the first quote, into dutch" and the first header said (Dutch) "Translation of quote 1" Commented Dec 3 at 14:54
  • 1
    That's definitely a bug, good catch! Do you mind sharing the chat with me? Commented Dec 3 at 14:57
  • Chat can be found here: stackoverflow.com/ai-assist/chat/… Commented Dec 3 at 15:00
  • 2
    I really appreciate it. I was able to reproduce it by asking for translations a few times. Commented Dec 3 at 15:02
  • 2
    It may be pertinent to note that, with very minimal exception, foundation large language models are intrinsically multi-lingual and will converse in almost any language in which you interact with it, except when deliberately & appropriately guardrailed not to do so. In my day-to-day work with clients, it's heavily context-dependent whether or not this is a benefit or not. I do agree it may be valuable to remind potential askers conversing in non-English languages of Stack Overflow's English-only policy for contributions. Commented Dec 3 at 20:30
5


The close button in the Share menu does not work:

[screenshot: Share menu]

2
