
Tech Tonic | At what point does all the AI become too much AI?

Let me start with something that is slightly worrying. Elon Musk-owned X says that as of November 15, when their new privacy policy takes effect, the data of X users can be used by third-party “collaborators” (that’s how the new privacy policy language articulates it) to train their artificial intelligence (AI) models. Did you sign up for this? I take this as an illustration of something that’s fast getting out of hand. Is the AI envelope around everything we do becoming thicker than the earth’s ozone layer?
“The recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise,” reads X’s incoming privacy policy. There are mentions of a mechanism to opt out of sharing this data, but as of now, there isn’t a setting or toggle to suggest how to do that. Perhaps an Elon Musk humanity-saving tweet shall shed some light on that in the coming weeks.
There was a simpler time when our collective data on the World Wide Web was harvested to serve us ads, which made money go around and multiply for corporates. Data was the new oil, they said then. Data is the new oil now, too. It is just that, beyond ads, AI models signify the next stage of tech evolution. Whoever has that supremacy has the ultimate supremacy.
At this point, a question has been burning inside — at what point does all this AI become too much AI?
I pondered this (though unrelated to X’s latest unforeseen yet not entirely surprising letdown, which happened later) as Adobe detailed the new capabilities across its apps, including Photoshop, Lightroom, Premiere Pro and others, at the keynote and briefings at their annual MAX conference. Most of the new stuff in this latest set of significant updates is underpinned by AI and their Firefly models. Video-generative AI is the next big thing. That’s something I’d detailed in my pieces from the trenches.
At the three main stage sessions, including the keynote, and all the briefings I got access to, the company left no stone unturned to push the case for Firefly and broader AI use. It is great to see Gen AI being useful in cleaning up our photos (removing wires from cityscapes and architecture is great) and helping fill up video edit timelines with quick generations. But as I asked Deepa Subramaniam, who is Vice President, Product Marketing, Creative Professional at Adobe, is it changing the definition of creativity?
“The act of editing in Lightroom to me is not just about getting the photo I want, but reliving that photo through the act of editing and tapping into the nostalgia,” she told me. Her opinion is that a person using these tools should hold the keys to creative decision-making. Whether they want to remove those pesky, eyesore electricity cables spoiling the frame of that gorgeous piece of architecture they’ve just photographed, or not. Or to improve the texture and colour theme of the sky as they saw it at sunset, instead of how the phone’s camera decides to process it. To do it or not must remain a human call; the option should be there. That’s Adobe’s take on the matter.
Yet, it may not be as simple. Generative fill for photos uses AI to add background and extend a frame with elements that perhaps didn’t exist, or that the human eye didn’t see. That’s one side of the coin. On the other side, professionals using Adobe Illustrator and Adobe InDesign will disagree that too much AI is a bad thing. ‘Objects on Path’, for example, or even generating textures, graphics, patterns or imagery within a shape, vectors, or even letters. You could make a valid argument that the typical skill set you’d expect of a designer may no longer be necessary between these powerful software tools and the end result. Could any human with some sense of aesthetics and design get the job done?
That may perhaps be the point. AI can and must simply remain a tool, with human oversight when required. The use cases for Adobe’s tools, Canva’s tools, Pixelmator’s AI editing options, Otter’s AI transcripts for audio recordings, or even Google’s AI Overviews in Search all allow a human to take corrective measures as and when needed. But do we?
This takes me back to an article published in Nature earlier this year, which talked about how AI tools can often give their users a false impression that they understand a concept better than they actually do. One, whether willingly or out of a limited skill set and understanding, blissfully follows the other down the same path.
“People use it even though the tool delivers mistakes. One lawyer was slammed by a judge after he submitted a brief to the court that contained legal citations ChatGPT had completely fabricated. Students who have turned in ChatGPT-generated essays have been caught because the papers were ‘really well-written wrong’. We know that generative AI tools are not perfect in their current iterations. More people are beginning to understand the risks,” wrote Ayanna Howard, who is dean of the College of Engineering at Ohio State University, for the MIT Sloan Management Review, earlier this year.
The examples she references are of Manhattan lawyer Steven A. Schwartz and students from Furman University and Northern Michigan University. That puts the spotlight on the more liberal usage of generative AI tools, such as chatbots and image generators, which most people tend to use without further due diligence or research on the output that’s been provided. AI has been wrong on more than one occasion.
The funny thing is, more and more humans are realising that AI isn’t always right. Equally, human intelligence doesn’t seem to be identifying and correcting these mistakes as often as it should. You’d have expected the lawyer and the students mentioned in Howard’s illustration to have done so. Those are specific, specialised use cases. Yet, the humans in that sequence took the core tenets of a typical AI pitch, human-level intelligence and saving time, too seriously.
For tech companies showcasing new platforms, updates or new products, there is of course pressure from more than one dimension. They’ve to be seen keeping pace with the competition and surpassing it. Apple’s had to do it, even though not everyone who’s bought their latest iPhones has the Apple Intelligence suite yet. Google’s had to do it, and Gemini is finding deeper integration in more phones now that the Samsung exclusivity period is done. Microsoft is betting big on OpenAI, which is why any upheaval at the latter becomes a cause of concern at Redmond too.
Also, they’ve to be seen talking about all things cutting-edge, which helps stock prices (well, mostly) and keeps investors happy. I spoke about Adobe’s extensive AI pitch. Their landscape includes rising competition from Canva, which has its own smart AI implementation bearing fruit (expect the recent Leonardo.ai acquisition to result in new tools), competition from tools that do specific things, and investors who would still remember the $20 billion acquisition of Figma that was abandoned late last year.
None of this is easy. Therefore, the next question to be asked of generative AI is — can AI solve the mess AI is creating? Unlikely.
Vishal Mathur is the technology editor for Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.
