ChatGPT/AI is coming for the workplace
Microsoft, in its latest “Work Trend Index” report, promises that artificial intelligence (AI) is the technology that will make all of our jobs easier.
The company says it has found that the constant inflow of data, emails, meetings, and notifications workers must deal with daily has placed us all in a “digital debt.” Since humans cannot physically and mentally keep up with such onslaughts of information, AI technology is at the ready to assist.
“Across the Microsoft 365 apps,” the report states, “the average employee spends 57% of their time communicating (in meetings, email, and chat) and 43% creating (in documents, spreadsheets, and presentations). The heaviest email users (top 25%) spend 8.8 hours a week on email, and the heaviest meeting users (top 25%) spend 7.5 hours a week in meetings.”
Microsoft proposes an “AI-employee alliance” that would give workers more time to focus on important tasks and real opportunities to enhance their creativity.
The company says it believes the impact of AI will be evident by the year 2030. “When asked what changes they value most, people imagined producing high-quality work in half the time (33%), being able to understand the most valuable ways to spend their time (26%) and energy (25%), and never having to mentally absorb unnecessary or irrelevant information again (23%).”
Microsoft is, of course, the financial backer of the startup research firm OpenAI, the company that created the artificial intelligence systems ChatGPT, DALL-E 2, and GPT-3. This past January, Microsoft announced it had provided OpenAI with a “multiyear, multibillion dollar investment to accelerate AI breakthroughs.”
But contrary to Microsoft’s rosy outlook on the promises of AI, many workers have been concerned about AI’s ability to imitate human-like intelligence. If the future means we will see AI-programmed robots that are capable of doing human jobs, workers are wondering where they will fit in.
During a May 16 hearing by the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, OpenAI’s CEO Samuel Altman for the most part agreed that elements of AI will need to be regulated. Altman said, “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” The OpenAI CEO even suggested the U.S. government should create a new regulatory agency to monitor AI and its licensing and testing requirements. He said, “If this technology goes wrong, it can go quite wrong. We want to be vocal about that. We want to work with the government to prevent that from happening.”
“The basic question we face is whether or not this issue of AI is a quantitative change in technology or a qualitative change. The suggestions that I’ve heard from experts in the field suggest it’s qualitative,” Senate Majority Whip Dick Durbin (D-IL) said during the hearing. “I’ve heard of the positive potential of AI, and it is enormous. You can go through lists of the deployment of technology that would say that an idea you sketch for a website on a napkin could create functioning code. Pharmaceutical companies could use the technology to identify new candidates to treat disease. The list goes on. And then, of course, the danger, and it’s profound as well.”
Sen. Richard Blumenthal (D-CT) advised that “Perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs.”
Yet just a month prior to the Senate hearing, Jessica O. Matthews, founder and CEO of the software company Uncharted, told attendees at an AfroTech Executive event in Seattle, Washington, that AI is nothing to be afraid of.
“You should not be afraid of AI, you should be afraid of the people who are building it,” Matthews declared. “AI, artificial intelligence … it’s kind of like a child; it’s like a robot baby. ChatGPT is at best a sassy 7-year-old, and we all know that 7-year-old … [who] recently grew up with all the social media platforms and be out here talking to you like they grown. And you’re like, well damn girl, you grown. No, they’re just online.
“Do not like have this 7-year-old do your taxes. It might go well sometimes––until it does not.”
Matthews said the problem with AI is that it needs to be demystified. It’s basically code that has the ability to learn. It’s often explained as an algorithm, which can sound scary, but algorithms are basically processes, she clarified. “If you have a process for anything, that’s an algorithm … And all it actually comes down to is how are you teaching that [process] to an artificial intelligence? How are you teaching that to this robot baby so that it can start to do that for you?”
If the only people with access to teaching AI robot babies have intentional or unintentional biases, they are “framing the way that this child should observe and respond to the world,” Matthews said, and that is what we should fear.
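Matthews’s framing can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the article: a hand-written hiring shortlist rule stands in for an “algorithm as a process,” and a rule “learned” from past decisions stands in for teaching that process to an AI. If the past decisions it learns from were biased, the learned rule inherits the bias.

```python
# A process written out by hand is already an algorithm.
def shortlist_by_rule(candidates):
    """Hand-written process: shortlist anyone with 5+ years' experience."""
    return [c for c in candidates if c["years"] >= 5]

# "Teaching" that process to a machine means it learns the rule from
# examples instead of being told it directly.
def learn_threshold(examples):
    """Learn the smallest 'years' value that past reviewers shortlisted.
    The machine only sees what it was shown, biases included."""
    shortlisted = [e["years"] for e in examples if e["shortlisted"]]
    return min(shortlisted)

# Biased history: past reviewers only ever shortlisted people with
# 8+ years, so the learned rule is stricter than the intended process.
history = [
    {"years": 8, "shortlisted": True},
    {"years": 10, "shortlisted": True},
    {"years": 6, "shortlisted": False},  # overlooked by past reviewers
]

learned_cutoff = learn_threshold(history)
print(learned_cutoff)  # 8, not the intended 5: the bias was learned
```

The “robot baby” never saw the intended 5-year rule, only what its teachers actually did, which is exactly the concern Matthews raises about who gets to do the teaching.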
The post ChatGPT/AI is coming for the workplace appeared first on New York Amsterdam News.