OpenAI’s ChatGPT is impressive and frightening. The artificial-intelligence program can write authoritative-sounding scholarly papers, computer code and poetry and solve math problems — though with some errors.
It passed a tough undergraduate microbiology exam, as well as graduate law and business school exams at the Universities of Minnesota and Pennsylvania.
It’s been paired with the email program of a dyslexic businessman to assist with clearer communications that helped him land new sales.
The technology has ignited fierce debate. Is artificial intelligence a jobs killer? Can the integrity of academic credentials be protected against plagiarism?
The answer is yes — if your work is fairly structured or regulated. OpenAI is working on systems to identify AI-generated text, but so far with limited success.
Creating tools that help lawyers draft briefs and programmers write code more quickly, automate aspects of white-collar and managerial critical thinking, and assist with elements of creative processes offers huge business opportunities. For example, Microsoft (MSFT) is investing $10 billion in OpenAI, and Alphabet’s (GOOGL) Google is ploughing cash into ChatGPT rival Anthropic.
ChatGPT answers questions by recognizing patterns in vast amounts of text drawn from the web. It is tutored by humans and refined through user feedback, and should become more accurate with use. It seems best at offering established thinking on issues. When asked for a market-beating stock portfolio, for instance, it replied that you can’t beat the market.
ChatGPT isn’t prescient and requires human supervision for any application where mistakes could cause emotional, financial or physical harm. Software engineers may be able to use it for first drafts of complex programs — or modules inside larger projects — but I doubt Boeing (BA) will put AI-generated code into its navigation systems without close human engagement.
Overall, ChatGPT will become another tool that helps people accomplish bigger tasks more quickly and reduces the number of people needed for mundane, less satisfying activities.
Privacy matters
Like robotics, AI will free up people for more sophisticated work. Much of what we think and do is not mechanical or formulaic; it requires weighing tradeoffs and applying values to grey areas.
At work, we interpret company policy that goes beyond what’s in the manual, drawing on decisions sanctioned by policymakers: informal precedents. Our personal decisions leave digital dust on our computers, phones and internet accounts. Most successful people are reasonably moderate in disposition and wrestle with tradeoffs when allocating scarce resources and choosing strategies. It comes down to internalized algorithms and assessments of risk.
That’s where the danger lies. How we think and act is the sum of what has been poured into us through childrearing, education, experience and, nowadays, what we find on the internet. Moreover, our personalities may be revealed by the websites we visit, the places we travel and the emails we send, to name a few.
ChatGPT and AI will be more effective if permitted to mine some of that information. The more access we afford AI programs, the more quickly and effectively they will serve us. This raises opportunities for rewards and praise, but also the danger of censure and a terrible loss of privacy.
Peter Morici is an economist and emeritus business professor at the University of Maryland, and a national columnist.
More: If you’re investing in AI stocks, watch out for these revenue and earnings tricks
Plus: ChatGPT may be good at your job, but AI is a terrible stock picker