Three thoughts:
These AIs are acting as user-agents for us, in the "Hyperland" (Douglas Adams) sense of the term. Instead of us seeking out potentially incorrect information from identifiable sources on the web, a proxy can now present some of it without telling us where it came from! (hooray)
Would an AI trained entirely on correct information, with no intrinsic bias in the training set, ever produce inaccurate or biased answers?
Do any pub quizzes permit some upper-bounded percentage of a team to be artificial intelligences (...yet)?