“Having our own foundational AI model is crucial because our demographics are different”

Dr. Ajay Sood was a leading physicist at the Indian Institute of Science before being appointed India’s Principal Scientific Adviser in 2022. He spoke to The Hindu about where India stands in artificial intelligence and quantum computing, and why the world is at the cusp of major technological change. Excerpts:

How do you view the emergence of DeepSeek? Should India be worried about whether we have the resources to catch up?

DeepSeek is a wake-up call, of course, but perhaps more for the Americans than for us. I find it hard to believe that it was developed for $5 million; it was probably more. Recent reports seem to bear this out, and suggest that they had in fact assembled 15,000 to 20,000 GPUs (graphics processing units) and were using them. Making them work in parallel, however, involves innovation. Instead of running all 660 billion parameters, they ran only a fraction of the model at a time and found ways to connect the GPUs in parallel. That is a breakthrough, although I must add that others have been using this method as well, even in India. In our AI Mission, we have addressed several of these problems. We have the computing resources, and the Mission will develop the necessary foundational AI models. You need data centres where all of this compute is available. The government has floated tenders for the private sector to provide these computing facilities, with the government as the buyer. The government may not need all of it, but it will take what it wants. So 18,000 GPUs have been planned, and these computing facilities are being set up in India with seven to eight private-sector participants. To develop a foundational large language model (LLM), you need data sets to train your model. How would you train it without computing facilities? That is the problem being solved now.
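[One common way the “running only a fraction of the model at a time” idea is implemented is a mixture-of-experts layer, in which a router activates only a few expert sub-networks per token, so far fewer than the model’s total parameters are used in any one forward pass. The following is a minimal, hypothetical PyTorch sketch of that idea; the sizes, class names and top-2 routing are illustrative assumptions, not DeepSeek’s published design.]

```python
# Minimal mixture-of-experts sketch: only a few "expert" sub-networks run per token,
# so far fewer than the model's total parameters are active on any forward pass.
# Sizes and top-2 routing are illustrative assumptions, not DeepSeek's actual design.
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)          # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, dim)
        scores = self.router(x)                             # (tokens, num_experts)
        weights, chosen = scores.softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                      # run only the chosen experts
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoELayer()(tokens).shape)   # torch.Size([16, 64]); only 2 of 8 experts ran per token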

India has now announced the AI Mission and decided to build its own foundational AI model; it seems we are in a reactive mode. Is it crucial for India to develop its own model, rather than adapting what is already available to custom applications?

Having a foundational model is a must. In fact, we took the decision on the AI Mission back in 2019 in the PM’s Science, Technology and Innovation Advisory Council (PM-STIAC), of which I am a member. But then Covid-19 struck and we lost two and a half years. I won’t say we are merely chasing AI.

There is a debate over whether we must have our own foundational model or adopt an open-source model and reconfigure it. We have our own requirements. We will have our own use cases. Our population is very different. Our diversity is very different. All of this will matter when you want to train your model. Openly available AI models will not have been trained on our cultural context. But we have to do both at the same time.

But do you really see our own foundational model translating into corresponding gains for our economy and jobs?

The answer is yes. If you don’t, you will remain in service mode. You will never make a breakthrough in technology development. We have reached a certain level, but that is not the end of the road. You can’t just parachute onto something and say, “I will start from where you are and build on that.” It doesn’t work like that in technology and science.

As a scientist, do you think the development of AI will, on the whole, be beneficial to humans?

My answer is yes. We need to think of it as something that supports us rather than replaces us. If AI can automate routine workflows, then our workforce can take off from there. There is nothing wrong with that.

AI seems to be qualitatively different from earlier software, because it can often produce results that people cannot always explain.

I agree, but that is why you need explainable AI, and people are developing it now. If I get an answer, the AI should be able to tell me where that answer came from. The Ministry of Electronics and Information Technology has brought out a report on this, which was open for public consultation until February 28. It takes care of all these things.
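[One simple technique behind “the AI should be able to tell me where the answer came from” is feature attribution, for example input-gradient saliency. The sketch below is a toy illustration only; the model and input are made up, and this is not the method prescribed in the ministry’s report.]

```python
# Toy feature-attribution sketch: input-gradient saliency shows which input features
# most influenced a prediction. Model and data here are made-up illustrations.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # hypothetical classifier
x = torch.tensor([[0.9, 0.1, 0.4, 0.7]], requires_grad=True)          # one hypothetical input

logits = model(x)
predicted = logits.argmax(dim=1).item()
logits[0, predicted].backward()          # gradient of the winning score w.r.t. the input

saliency = x.grad.abs().squeeze()        # larger value = this feature mattered more
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: influence {s:.3f}")
```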

Microsoft recently claimed to have successfully developed Majorana 1, the first quantum chip powered by a topological core architecture. How important is this, especially when quantum computers still haven’t solved meaningful real-world problems?

Topological qubits [a way to store quantum information that could lead to more powerful computers] were known theoretically. They have demonstrated an 8-qubit device. If that succeeds, scaling up will be faster. Topology protects the qubits from defects and interference, making them more robust. That is the whole point of topoconductors [topological superconductors], and it is a moment of triumph. Thirty years ago this was basic science that seemed hypothetical, even impossible, and today it is a reality. I think this is great.

Do you think everyone working on quantum computers should pursue this scientific direction? Should India work on Majorana qubits too?

There are a number of groups in India working on this. We are not starting from zero. The challenge is to take it from science into technology. A team of Microsoft’s theoretical physicists, computer scientists and materials scientists has worked on this for 15 years. Even when all the hype was around superconducting quantum devices, they did not give up on [this parallel approach to quantum computing]; that is the point you have to see. They knew they were on the right path. We should not rush to condemn things or throw them out. So it is not as if superconducting qubits will go away. We have to wait and see, because in quantum computing we still don’t know which modality will win. Then the next paradigm is quantum AI. That could mean using quantum computers to train AI models, and I can’t even imagine what that would mean. Perhaps a completely new way of understanding models and training. That is the beauty of this field. We are at the cusp of a new era.

Published – March 9, 2025, 10:03 pm
