I guess I don’t have to worry about my job going away quite yet. This is what Twitter’s AI thingy thinks is currently happening in the industry I work in.
> This would require judgement and discernment that "AIs" do not possess. At best, they could be used to check scraped data from various human-picked trusted sources and collect them together for analysis.

To be fair, it is not too far off from the journalism we see splattered across the web. Clickbait titles that don’t match the actual story, no verification of facts, and little attempt made to determine the true story are probably more common than true journalism.
What would be nice is if an AI could build up a list of trusted sources for an industry, or research and find those people, and then conduct follow-up fact-checking with them. That is time-consuming work human “journalists” rarely do anymore. The article could then quote and list its sources. If a source wished to remain anonymous, the AI would need to verify those facts against a minimum consensus of other sources, or label them as potentially untrustworthy.
Real, old-school journalists used to do this all the time. ENWorld still does this. Many outlets do not.
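The anonymous-source rule described above amounts to a simple consensus check. A minimal sketch, with a made-up threshold and made-up source names (nothing here reflects a real system):

```python
# Toy version of the consensus rule sketched above: a fact from an
# anonymous source is only labeled "verified" if enough distinct
# trusted sources confirm it; otherwise it is flagged.

MIN_CONSENSUS = 2  # hypothetical threshold; a real system would tune this

def label_fact(confirmations: list[str]) -> str:
    """Label a fact by how many distinct sources confirm it."""
    distinct_sources = set(confirmations)  # duplicates don't add consensus
    if len(distinct_sources) >= MIN_CONSENSUS:
        return "verified"
    return "potentially untrustworthy"

print(label_fact(["Reuters", "AP"]))      # verified
print(label_fact(["anonymous tipster"]))  # potentially untrustworthy
```

Deduplicating the confirmations matters: one outlet repeating itself should not count as consensus.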
> Well to be honest there's lots of "humans" that also fail this test.

That is the problem in a nutshell.
Currently, AI can't tell the difference between "trending" and true.
> Well to be honest there's lots of "humans" that also fail this test.

Hence, passing the Turing Test is soon.
> Hence, passing the Turing Test is soon.

AI have passed Turing Tests several times already. It has been an outdated benchmark for several years.
> AI have passed Turing Tests several times already. It has been an outdated benchmark for several years.

I can still easily distinguish whether I am talking with a human or with a computer.
> I can still easily distinguish whether I am talking with a human or with a computer. I am still waiting for Turing, to be genuinely uncertain when conversing at length.

I regret to inform you that you'll be waiting for Alan Turing for a long time. He ate a poison apple in 1954 and died.
> Hence, passing the Turing Test is soon.

As others have noted, more than one AI has been able to pass a Turing test before--as in, any one given trial thereof. Whether or not it is even possible for any AI, even a theoretical true AGI that is itself conscious and sapient, to pass every Turing test ever applied to it... well, that just seems like an impossibly high bar.
> AI have passed Turing Tests several times already. It has been an outdated benchmark for several years.

Whether or not it is "outdated" is frankly irrelevant. It has faced a serious, largely-unanswered criticism of its incompleteness since at least 1980, with Searle's Chinese Room argument. That something can mimic the syntax of a language is an inadequate proxy for whether that thing has a mind in the way that humans have minds. It may in fact be that the entity has a mind; but showing this purely through syntactic manipulation does not demonstrate that it has one.
> I can still easily distinguish whether I am talking with a human or with a computer. I am still waiting for Turing, to be genuinely uncertain when conversing at length.

Then you simply haven't been following the literature. Several programs have fooled people at length, and that was even before ChatGPT came on the scene. ChatGPT as it stands (which, AIUI, uses GPT-3) has successfully passed more than one round of Turing testing--because the objective isn't to make it so people cannot ever tell they aren't chatting with an AI, but rather to make it so that a third-party observer cannot do better than chance (p = 0.5) at determining which of two chat participants is a person and which is an AI when both are attempting to speak as naturally as possible.
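That pass criterion (judges doing no better than chance) can be made concrete with a one-sided binomial check. The trial counts below are invented purely for illustration:

```python
from math import comb

def p_value_at_least(correct: int, trials: int, p: float = 0.5) -> float:
    """Probability of getting >= `correct` right out of `trials` judge
    decisions if the judge were purely guessing (chance, p = 0.5)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical run: judges picked out the AI 16 times in 30 conversations.
pv = p_value_at_least(16, 30)
print(f"p-value under pure guessing: {pv:.3f}")
# A large p-value means the result is consistent with coin-flipping,
# i.e. the judges could not reliably tell human from machine.
```

By contrast, 25 correct out of 30 would be wildly improbable under guessing, and the AI would clearly have failed the test.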
> "AIs" encode only what they are exposed to in their neural network. If they are exposed to a falsehood, they will regurgitate falsehoods. These are distinct from what are called "hallucinations," where the AI is confidently wrong about something more or less by accident (AIUI, because it prioritizes good grammar above any other concern). Apparently, there are differences in internal processing between hallucinations and (for lack of a better term) "sincere" responses, which means it is possible to train a module to detect these differences. However, even with such efforts, (a) "hallucinations" don't go away, they just become harder to identify, and (b) unknowing but "sincere" falsehoods cannot even in principle be detected this way.

I mean, this is a problem for human beings as well. Circa three hundred years ago, there were people confidently asserting that combustion is a phlogiston-dependent phenomenon. And, without pushing too hard against board rules, I'm sure you're aware of many contemporary examples of large numbers of humans confidently and sincerely affirming falsehoods.
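The detector described in that quote relies on model internals we can't reproduce here, but a cruder, related heuristic, self-consistency sampling (ask the same question several times and flag disagreement), can be sketched. The answer lists below are canned stand-ins for repeated model outputs:

```python
from collections import Counter

def flag_possible_hallucination(sampled_answers: list[str],
                                agreement: float = 0.8) -> bool:
    """Flag an answer when repeated samples of the model fail to agree.

    `sampled_answers` stands in for several independent generations of
    the same prompt; `agreement` is a made-up consensus threshold.
    """
    top_count = Counter(sampled_answers).most_common(1)[0][1]
    return top_count / len(sampled_answers) < agreement

# Consistent answers -> not flagged; unstable dates -> flagged.
print(flag_possible_hallucination(["Paris"] * 5))                     # False
print(flag_possible_hallucination(["1912", "1915", "1921", "1912"]))  # True
```

Note that this only catches unstable outputs: a falsehood the model "sincerely" learned would be repeated consistently and slip straight through, which is exactly point (b) above.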
So we still have the enormous problem of correcting AIs that are confidently incorrect.
> This would require judgement and discernment that "AIs" do not possess.

Right. Which is not a problem in linguistics (syntax vs. semantics). It's a problem in epistemology (evidence/warrant).