Implementation of AI: "The biggest challenge academic institutions face is the lack of clear policy"

News Tank Academic - Paris - In-depth interviews #386483

"The biggest challenge institutions face in effectively integrating AI Artificial intelligence is the lack of clear policy. It is understandable because technology is rapidly changing so government regulations are sometimes vague", declares Claudio Colaiacomo Vice President Public Affairs and Academic Relations @ Elsevier
, Vice President of Global Academic Relations at Elsevier BV, in an interview with News Tank on 30/01/20.

He discusses the Elsevier report titled ‘Insights 2024: Attitudes toward AI’, released in the summer of 2024, which drew on a survey of respondents from across the globe conducted over several months.

While the implementation of AI is not yet universal, "the opportunities are endless, but many of them are not clear yet", according to Claudio Colaiacomo.

"Accelerating research productivity and discovery is perhaps the most important one because AI can take up complex and lengthy tasks. Its ability to explore a vast corpus or library of data and content and identify specific information is a game changer."

AI also offers opportunities in the educational and administrative areas, although ethical questions remain when it is used to handle data or take part in research. Claudio Colaiacomo recommends that institutions, "while being rightfully cautious, should go outside their comfort zone".

Ahead of the AI Summit in Paris on 10 and 11/02/2025, Elsevier's Vice President of Global Academic Relations goes through challenges, benefits and risks surrounding the implementation of AI.

Elsevier is a Dutch scientific publisher and data analytics company specialising in scientific, technical, and medical content. Founded in 1880, it operates in over 170 countries and regions.


Lack of clear policy: "Governments look to put boundaries around AI and address fears and concerns"

The report highlights a strong awareness of AI but varying levels of implementation. How do you interpret this gap between awareness and practical application of AI within academia?

Everybody knows about AI, and it is evolving very fast. Now we have debates about ChatGPT and, for example, new technologies from China. The media also talk about it, and competing companies like Google and Microsoft have issued their own versions of generative AI. Several round tables will also be devoted to it at the AI Summit in Paris on 10 and 11/02/2025.

Within research institutions, the level of knowledge is even higher, because some researchers have been using AI for a decade; the technology has been around for a long time, even if the general public has only recently discovered generative AI.

"Sometimes institutions, governments or the EU don't have clear policies about AI"

Companies like Elsevier have been able to leverage generative AI to better assist researchers in their day-to-day work with platforms like Scopus AI, which allows researchers to rapidly identify relevant research, sift through Elsevier's vast database of content, and complete tasks in minutes rather than hours or days.

There are concerns about the ethical use of AI, and training on AI is not always available. There is also often a lack of clarity about what can be used or not. Sometimes institutions, governments or the EU (European Union) don't have clear policies about AI. That is rapidly changing as governments look to put boundaries around AI and address fears and concerns. There is also a cost element, which could explain the gap.

In your view, what are the main challenges institutions face in effectively integrating AI?

The biggest challenge is the lack of a clear policy. It is understandable, because the technology is rapidly changing, so government regulations are sometimes vague. Institutions that want to integrate AI effectively should adopt clear responsible-AI policies.

From the report, we know that almost half of the researchers interviewed were not sure of their institution's policy on AI. And in a world where ethical aspects and the quality of data are important, we need clear regulations. At this point, there is mistrust around the ethical implementation of AI, and I think it is slowing adoption and maybe even cultivating a climate of resistance in universities.

"Once the ethical and financial base is laid out, institutions can focus on research, education and administration"

Beyond the challenges, what are the most promising opportunities that AI could bring to research, education, and the administration of academic institutions?

The opportunities are endless, but many of them are not clear yet. Accelerating research productivity and discovery is perhaps the most important one because AI can take up complex and lengthy tasks. Its ability to explore a vast corpus or library of data and content and identify specific information is a game changer.

It can even help researchers identify collaborators and partners who have worked on similar research or have a similar interest or field of study. By stimulating cross-disciplinary collaboration, it enhances research quality.

"Companies are becoming better at harnessing AI and technology to identify and prevent biases"

In the education area, AI offers a way to create new teaching models and methods and to customize learning. It can answer conversational questions and simplify complex sets of information and ideas while providing actual sources.

University staff have to manage a large quantity of data, and AI can accelerate how that data is identified and put to efficient use. But the ethical dimension becomes important, especially when AI works on a database of personal or sensitive information. Biases are always around the corner. However, companies are becoming better at harnessing AI and technology to identify and prevent them.

If you were to advise academic institutions, what are the main areas they should focus on in their AI strategies?

I would suggest having a clear, shared vision on the leadership board and creating a set of clear responsible-AI principles. A university president should make sure the governing board is aligned with their vision. Based on this, they can implement clear policies, compliance regulations and ethical oversight.

Institutions should also invest in training, tools and infrastructures. Implementing AI does not come for free, but once the ethical and financial base is laid out, institutions can focus on research, education and administration. They will see the investment is worth it.

What kind of training should academics and researchers follow to safely and effectively implement AI in their work?

Researchers need to understand the opportunities and ethical implications; this includes understanding the technical basis of AI. They should see it as a resource that complements human intelligence rather than replaces it, and institutions should make sure this is understood.

It is also important to train users to validate the outputs and inputs. Research leaders should foster a culture of critical questioning of AI results: the human factor remains key, and we can't set it aside simply because AI is faster.

"If you are too cautious, you risk being left behind by others who embrace the change and advancement"

What other risks could there be in working with AI, and how can they be avoided?

The risks are many and they are changing day by day. The main ones are a lack of trust and governance, leading to poor policy implementation, along with risks around data privacy, over-exposure to particular technologies, and intellectual property concerns.

While being rightfully cautious, institutions should go outside their comfort zone. If you are too cautious, you risk being left behind by others who embrace change and advancement, which you can do cautiously.

Another risk is linked to the visibility and use of critical research data by other nations. This is the topic of scientific sovereignty, which is often unclear, especially in how it is addressed in policies and regulations. Universities could unknowingly expose sensitive data to the world through AI systems that are poorly designed or managed, or that are not governed by responsible principles and policies.

"This is why peer review and human oversight are so important"

An institution may be aware of the risks of bias but not be clear on how this bias propagates, and it may end up having a big impact. Finally, there is the risk of plagiarism: AI may use sources without proper citation or proper respect for copyright, which may put an institution at risk of violating academic standards.

With time, AI can help us identify these violations more easily, but this is why peer review and human oversight are so important. Currently, mistakes can still get through, but as we identify them, we get closer to solving the issue.

How do we evaluate the benefit/risk balance of working with AI for institutions?

The benefits fall into three categories: research capabilities, improved learning and improved efficiency. The risks are bias, cost, privacy and misuse of AI. The ability to assess the balance comes from a structured approach to AI adoption, based on clear guidelines and strategic goals.

By doing this, an institution can develop indicators and ways to measure the risks. You cannot drive the implementation of a technology if you don't know exactly where it fits in your strategy and how to measure whether you are going in the right direction.

Variations in adoption of AI: Regions, job positions, gender and financial status as factors

What are the main variations in AI adoption and sentiment across different regions?

There are clear differences between parts of the world, but AI is global: 96% of the people we interviewed were familiar with it. In the Asia-Pacific region, particularly in China, we see a higher adoption of AI in work streams. They embrace AI with limited scepticism, especially on ethical concerns, whereas Europe and the United States worry a lot; they try to define AI regulations and to be fair, so they are slower.

There is also a question of budget availability: rich nations tend to adopt AI faster. And there is a gender perspective: men are almost twice as likely to have a positive sentiment about AI, while women are more cautious and want to evaluate risks and implications.

What are the main differences between academic leaders and researchers towards AI adoption?

We can see some differences, mostly because they have different perspectives on research. Academic leaders are concerned with regulations and performance, with what the government is doing (as their job is strongly influenced by it, especially in Europe), and with ethical concerns and the risks AI may bring to their institutions.

Researchers, meanwhile, show an inclination to experiment with AI, for example for data analysis, but they worry a lot about biases, because if AI does not give the right answer, the blame is on them.

"Universities should play a role in shaping policy or regulations on AI"

How should publishers develop guidelines for the ethical use of AI in peer review?

The publishing community is looking closely at the development of AI. It is important to adopt and implement clear guidelines in peer review. Since peer review is the foundation of how research is conducted, researchers and publishers are very careful to preserve integrity and confidentiality.

Elsevier has developed clear guidelines for reviewers to safeguard confidentiality, fairness and data security. For example, reviewers are prohibited from using AI to assess manuscripts, and using AI to write content must be declared at the start of the submission process. Guidelines for reviewers are part of a broader commitment to preserving integrity and are a top priority.

How can institutions and publishers foster a culture of responsible AI use?

Institutions can introduce regulations and help the government build a national framework. Universities should play a role in shaping policy or regulations on AI, as they have the competencies and are the producers of the science behind AI. They are responsible for informing students and future researchers about the risks and benefits of AI.

Publishers also have a role; it's a twofold approach: one part towards employees (internal and external ethical standards), the other towards the scientific community and the publishing process. It's important to build an agile ecosystem around this, because what we discuss now won't be the same in a few months or a year. AI is maturing rapidly, and we have to mature with it.

Claudio Colaiacomo



Career

Elsevier
Vice President Public Affairs and Academic Relations
MIB Trieste School of Management
Lecturer of Mindfulness within the Executive MBA programme
Reed Elsevier
Vice President
Elsevier
Director of Sales
Elsevier
Senior Account Manager
Planar Systems
Technical Sales Manager
Planar Systems
Vice General Manager
The College of New Jersey
Teaching/Research assistant

Studies & Diploma

Università di Pisa
Masters, Neuroscience and Contemplative Practices
Mindproject
Mindfulness based Counseling
MIB Trieste School of Management
Master of Business Administration (MBA), Business Administration and Management, General
Technische Universität Wien
PhD, Solid State Physics
Stevens Institute of Technology
Masters of Science, Engineering Physics
The College of New Jersey
Bachelor of Science, Physics


