I guess I don’t have to worry about my job going away quite yet. This is what Twitter’s AI thingy thinks is currently happening in the industry I work in.
"Good enough" is sometimes all we have. Solutions to the three-body problem, for example, are always "good enough" because, to the best of our knowledge, a general analytic solution is truly impossible. "Good enough" solutions got us to the Moon, got Cassini to Saturn, and a bunch of other stuff. "Good enough" specifically using "AI" may even help us solve the problem of fusion power (a recent German stellarator startup is using machine learning techniques to solve the problem of particles getting stuck in non-full-circuit orbits around the stellarator's internal magnetic field).

It also illustrates how the humans who keep pushing AI think that something that's "close enough" (be it trending vs. factual, superfast predictive text vs. writing, or image scraping/mashing vs. art) is equal to "good enough."
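To make the three-body point concrete: since no general closed-form solution exists, "good enough" in practice means numerical integration. Here's a minimal sketch with made-up equal masses, arbitrary initial conditions, and G = 1 units (not any real system), using the velocity-Verlet method; the check at the end is that total energy barely drifts, which is the usual sense in which such a solution is "good enough":

```python
# Minimal sketch: numerically "solving" the planar three-body problem.
# Masses, positions, and velocities are arbitrary illustration values.
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])                        # equal masses (arbitrary)
r = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])  # positions (arbitrary)
v = np.array([[0.0, -0.3], [0.0, 0.3], [0.0, 0.0]])  # velocities (arbitrary)

def accelerations(r):
    """Pairwise Newtonian gravity on each body."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def energy(r, v):
    """Total energy; conserved exactly in the true dynamics."""
    kinetic = 0.5 * np.sum(m * np.sum(v * v, axis=1))
    potential = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            potential -= G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])
    return kinetic + potential

dt = 1e-3
e0 = energy(r, v)
a = accelerations(r)
for _ in range(1000):          # velocity-Verlet steps, integrating to t = 1
    v += 0.5 * dt * a
    r += dt * v
    a = accelerations(r)
    v += 0.5 * dt * a

drift = abs(energy(r, v) - e0)  # small drift = "good enough" trajectory
```

No step of this is an exact solution, yet this is essentially how real trajectories get planned: pick a step size whose accumulated error stays below what the mission can tolerate.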
I don't think PR spin is what makes it funny. That would seem to be putting the cart before the horse--PR spin would be exploiting the humor that they hope is already present.

I assume it's PR spin until it is less wrong all the time.
Sorry, it was an obscure crowdfunding campaign and they ran out of baby dragons during fulfillment.

Ah, but are the players getting baby dragons? Because if they are, did you get them off Amazon, and can you share the link?
Because it can't. It is physically incapable of doing so, and unless a radical new development occurs, this will not, cannot change.
Yeah, I've heard some similar things. We're also running out of high-quality "natural" training data, which means we may even start growing a whole additional layer of "we don't know what's going on" by training one AI to generate high-quality fictitious data so that a second AI can use it as seed data for whatever the user actually wants to see happen. Personally, I'm really skeptical that such "synthetic" data can achieve anywhere near the same results as real-world data.

According to my son (who is graduating with his undergrad & master's from Stanford in CS), AI is pretty close to its current limits because we don't currently fully understand how or why it does what it does. Until we humans get a better understanding of AI, we don't have much room to improve the technology.
EDIT: Just want to clarify: there are still many untapped uses for AI as we currently understand it. That is still a growing field.
Not quite: the semantic content of "I am sitting at a desk as I type this" does not change whether the sentence is true or false. (This was an issue that Bertrand Russell wrote a lot about, around 110 to 120 years ago.)

Whether or not something is factually true is part of its semantic content: the meaning of the statement.
Alright. The actual truth value depends on content the AI cannot, even in principle, see or process.

Not quite: the semantic content of "I am sitting at a desk as I type this" does not change whether the sentence is true or false. (This was an issue that Bertrand Russell wrote a lot about, around 110 to 120 years ago.)
But it is generally accepted that truth depends on semantic content.