Ethical dilemma with OpenAI’s next-gen AI model

OpenAI, a leading force in generative artificial intelligence, has signalled a bold attempt to develop its next AI model using a team of fewer than 10 people, a move that marks a major shift in how cutting-edge technology is developed and governed globally.
While this initiative speaks to major strides in engineering efficiency and the sheer ingenuity behind AI, questions abound. Beyond the obvious ones, such as where AI should take the lead and where human beings add the most value, it is the ethical considerations of this new approach that stand out most.
Beneath that veneer lies a deeper concern, touching not only on transparency but also on power and the ethics of what we might call concentrated influence over a technology that millions, if not billions, will rely on to engage with knowledge and the world from the comfort of a phone or computer.
At a glance, the reduced headcount reflects a laser focus on efficiency and a belief in the capabilities of elite talent. It also mirrors the startup ethos of small, focused teams that strip away bureaucracy in favour of speed and innovation.
Where building such models once required huge teams of researchers, engineers, and data specialists, we now stand on the cusp of seeing frontier efforts carried out by a handful of minds.
Much as the idea that fewer than 10 people could build a model with global influence speaks to the maturity of AI development tools, and possibly even to the automation of parts of the model-building process itself, the excitement is not without its caveats.
That such a small group could build something so powerful reveals a troubling concentration of influence. Traditionally, the dissemination of knowledge has been distributed across institutions, publishers, and editorial boards.
That distribution has included peer review, public debate, and varying perspectives from specialists in particular sectors. Take a language model today: it is a responsive encyclopedia, a research assistant, a tutor, even a co-creator. Yet the ability to shape its personality, its knowledge boundaries, and its ethical compass lies in the hands of those who design it.
This raises concerns that go beyond the technical. How can the perspectives, biases, and worldviews of fewer than a dozen people not end up imprinted on the model itself? Even with safeguards, the architecture, the data it was trained on, and the prioritisation of certain tasks or capabilities will all originate from decisions made in a very tight, opaque circle. Suppose such a model is deployed at scale and begins to generate harmful outputs or restrict access to certain ideas; tracing the origin of those decisions back to a handful of individuals becomes a chilling exercise.
What happens when such a model becomes a reference for students, journalists, policymakers, or creators? What happens when it reinforces a particular historical interpretation or omits an emerging scientific consensus? None of this requires malice; these are concerns that come with real-world consequences.
As AI models get more powerful, the assumption might be that more people should be involved in ensuring their safety, fairness, and alignment with broad human values. OpenAI’s early commitment to openness has evolved into a more cautious, security-minded stance. But this now sets the stage for a future where only a select few have the keys to powerful systems, while the rest of the world watches from the sidelines, dependent on their judgment and values.
This isn’t to say that innovation should be stifled, or that small teams are inherently flawed. Some of the greatest breakthroughs in technology and science have come from small, tightly knit teams driven by a singular vision. But the stakes today are fundamentally different. The technologies being built are not just tools; they are systems of knowledge, decision-making, and even belief-shaping. They are embedded in the apps we use, the recommendations we follow, and the decisions we make. When fewer people hold the steering wheel, the consequences of each turn become far more significant.

This is simply to raise firewalls early and to ask the what-if questions. There is life beyond the technicals, and stakeholders must smell the coffee before it is served.
— The writer is People Daily’s Business Editor