These seem like interesting methods for simulating public discourse and achieving communicative rationality at some level. It's not just wisdom of crowds; it offers a feedback mechanism that could guide the end result toward some reasonable equilibrium.
But does this become less effective the more technical and obscure the issue is? Are there empirical or conceptual barriers to this democratic form of decision making, such that some questions may be better left to experts with the requisite background and evidence on hand? Moreover, one might have expected that social media and widespread communication would at least move people toward more reasonable viewpoints, but on many issues, some have become more extreme in the face of feedback. Could LLM-led discourse similarly produce "less" rational positions, despite expectations otherwise?
I think all of these are fair concerns! We'd probably need a lot of experimentation to see where they were more and less helpful, as with everything.
With highly technical questions in particular, I think there's a place for things like these to play an informative role. If, for instance, the question concerns waste disposal in a metro area (to pick a pretty boring and technical but life-affecting topic), then I think it's true both that experts will have the inclination and knowledge to dive into the details in a way the average person won't, and that ordinary individuals will have a lot of tacit knowledge about how they might be affected by or experience things, knowledge the experts lack but which the individuals themselves have no notion of how to act on. (For instance, my toddler loves watching the trash trucks, and seeing them come through and dump the bins is actually a highlight of our week, so it would be nice if they consistently came before we had to drop him off at daycare. This is a small matter on its own, but add up enough such concerns and it isn't.)
As for issues like polarization, I agree that's possible, although I'm not sure polarization is always irrational from a group perspective, since it allows additional parts of the possibility space of positions to get filled out and considered. Clearly it is sometimes bad, of course!
I think my biggest concern is that the more mediated versions of these (especially ones like the last) end up being substitutes for, rather than supplements to, conscious deliberation. This is a major source of my dissatisfaction with that other knowledge-aggregation and social-decision algorithm, the price system, though that has its place too.