Call for Abstracts

Scientific autonomy under pressure: Rethinking research evaluation in the context of global power shifts, security concerns, and artificial intelligence challenges

Research evaluation is being reshaped by global power shifts, rising security concerns, and new technological developments such as artificial intelligence (AI). At the same time, research is under increasing public scrutiny: cases of scientific misconduct, plagiarism, and data fabrication, together with the proliferation of paper mills and predatory journals, have raised questions about the credibility and ethics of contemporary science.

Debates on scientific integrity are increasingly intertwined with geopolitical interests and technological change, revealing research evaluation as a highly politicised tool rather than a neutral process. Strategic funding priorities and geopolitical competition shape what counts as valuable or legitimate research. As political pressures limit international collaboration, open science, and academic mobility, evaluation systems become key arenas where a variety of actors, including those who support more bottom-up approaches, negotiate research sovereignty and academic freedom.

In the context of conflicting priorities and shifting power structures in research governance, technological advancements are changing evaluation procedures in ways that both intensify and transform existing tensions. The emergence of AI-driven assessment tools, such as algorithmic indicators, automated peer review, and large-scale data analytics, is one of the most significant developments. These tools offer efficiency and comparability but also raise concerns about reliability, transparency, and fairness. When algorithms define standards of quality, there is a risk of narrowing intellectual diversity, reinforcing dominant paradigms, and marginalising critical or unconventional scholarship.

The academic community needs to critically reassess how evaluation affects not only excellence and impact, but also individual academic freedom, institutional autonomy, ethical responsibility, inclusivity, and trust in research. Fostering a diverse and reputable scientific ecosystem requires evaluation systems to preserve autonomy while promoting integrity.

The RESSH 2026 Conference invites scholars, policymakers, research managers, and practitioners to critically examine power shifts, security concerns, and AI challenges in research evaluation. Submissions may explore these topics from conceptual, empirical, or comparative perspectives, framing AI as a tool, an object of study, or a transformative force within the research ecosystem. Contributions may include case studies, theoretical frameworks, policy analyses, and methodological innovations, and may range from early-stage projects to mature research results. Topics may include, but are not limited to, the following:

  • Scientific autonomy under pressure
    • Centre-periphery dynamics and research sovereignty
    • Transforming the publication ecosystem (paper mills, open access)
    • Commercial infrastructures and dependencies in research evaluation
    • Research ethics, academic misconduct, and integrity in the digital era
    • Policy influence (evidence-based policymaking, political intervention)
  • Artificial intelligence and algorithmic evaluation
    • AI in research assessment: potentials and limitations
    • Algorithmic bias, transparency, and accountability in evaluation
    • Data governance, protection, and sovereignty in international research collaboration
    • Human oversight and responsible use of AI-based indicators
    • Predictive analytics, automation, and the future of peer review
  • Diversity, inclusion, and responsible metrics
    • Recognition of diverse research contributions and career paths
    • Equity in recognition: gender, geography, and epistemic diversity
    • Language diversity and multilingualism in research evaluation
    • Integrating responsible metrics and expert review
    • Narrative CVs and qualitative assessment methods
    • Peer review standards and practices
  • Reforming research assessment
    • Societal impact, community engagement, and co-creation of knowledge
    • Evaluating interdisciplinary and transdisciplinary research
    • Balancing qualitative and quantitative assessments
  • Beyond traditional metrics: alternatives to journal impact factors and the h-index
    • Development of holistic evaluation models
    • Rethinking institutional rankings
    • Impact of rankings on researcher evaluation
    • Institutional autonomy in setting evaluation criteria
  • Towards new governance models
    • Participatory and inclusive approaches to research evaluation reform
    • Balancing national policies with institutional and disciplinary contexts
    • Global cooperation and the ethics of data sharing
    • Frameworks for resilience, creativity, and trust in research systems