What Actually Counts as Interdisciplinary Research?
A data-driven look at which funders support knowledge integration
Interdisciplinary research is important. We believe this not because we have a weight of copper-bottomed evidence to support the claim,1 but perhaps because it feels intuitively true.
I've been thinking about this while reading Richard Hamming's book, which is a repository of amazing insights into the textured experience of working in scientific research. Hamming famously talked about the value of “open doors”:
Working with one’s door closed lets you get more work done per year than if you had an open door, but I have observed repeatedly that later those with the closed doors, while working just as hard as others, seem to work on slightly the wrong problems, while those who have let their door stay open get less work done but tend to work on the right problems! I cannot prove the cause-and-effect relationship; I can only observe the correlation.
This quote resonated with me, because it captures something we all recognise about how our institutions of knowledge actually work.
Most researchers are trained along what David Guest calls T-shaped career paths.2 This means gaining a broad grounding in the literature and methods of a field (the horizontal bar), then a deep probe into increasingly esoteric knowledge formation (the vertical spike).
The T-shaped career is a sensible and easily replicable model that demonstrably produces serious scholarship and builds genuine expertise.
But what are the connections between all these verticals? An office with an open door? A department, faculty, or an entire university? A journal editorial board or an academic conference?
And what happens when the questions we care about – particularly the big and messy ones – don't sit neatly within a single T or collection of similar Ts? What happens when keeping the door open isn’t enough?
That's where interdisciplinarity is supposed to come in. But the problem is that different disciplines don't just use different methods – they often have completely different ideas about what truth looks like. What counts as evidence? What counts as success? Try getting a clinical trials researcher and a design ethnographer to agree on the word "data" and you'll see what I mean.
So we end up with a curious tension. Our system trains people to dig ever-deeper wells when society is crying out for aqueducts.
The hard truth is that interdisciplinary research is genuinely difficult to do – it’s hard to fund, hard to undertake, hard to assess, and hard to publish. So what passes for interdisciplinarity might be merely institutionally performative: the talking shops and coffee mornings disguised as ‘embedding strategic themes’, or the funding application that we claim is interdisciplinary because it superficially involves someone from another faculty.
The questions that have been nagging at me are: what does interdisciplinarity actually mean? What does good interdisciplinary research actually look like in practice? And if we look at the actual corpus of research, can we tell the difference between genuine knowledge integration and institutional box-ticking?
Moving beyond shibboleths
So what do we mean by interdisciplinarity?
Perhaps we know! After all, it is certainly an important and widely accepted idea! Every university has interdisciplinary themes. Every funding agency has interdisciplinary calls. Every major societal challenge apparently requires interdisciplinary solutions.
In certain academic circles, incanting the term has become a kind of entry requirement for serious discussions about policy and strategy.
But ask what it actually means and you get institutional answers. Cross-faculty collaboration. Mixed review panels. Joint supervision. Team science.3 All of this might be very valuable, but it tells us nothing about whether the research itself is genuinely connecting different knowledge domains together.
If there is a performative element to interdisciplinarity, then I believe it’s well-intentioned at least. At its heart may be a sense of duty to honour the social contract of research: our shared sense that publicly funded science ought to be tackling problems that matter in the real world – many of which do not fit into our internal structures. In this way, interdisciplinarity can serve as a sort of "impact wrapping": a signal that our research agendas are connected to broader purposes.4
That isn’t bad or wrong per se, but there’s another very different way of looking at all of this. In discussions about the "edginess" or “novelty” of research, and particularly the scope for greater support for “disruptive” research, interdisciplinary research often gets positioned as the opposite of boring “mainstream” research. Where most research is supposedly “conservative” or “incremental”, interdisciplinary work can be entrepreneurial and boundary-pushing.
This is heady stuff, if true, and it might be uncomfortably Dionysian for some. But if we probe this intuition, we might indeed uncover more concrete notions of interdisciplinarity – where novel combinations of research activity, including through radically new institutional forms, can and do lead to breakthroughs at the intersection of fields.
After all, some of the most exciting recent advances have come from genuine integration across disciplines. AlphaFold is the obvious example: real breakthroughs at the intersection of biology and computer science, making advances in both fields.
But how much of what gets labelled "interdisciplinary" is actually like that?
Measuring what actually moves
If we can't define interdisciplinarity structurally, we're stuck with institutional proxies and good intentions. So I decided to approach this differently: what if we looked at the actual structure of research and tried to identify papers that are functionally interdisciplinary?
The approach I developed models the research literature as a giant citation network – papers as nodes, citations as edges – and then asks: which papers are acting as connectors between otherwise separate communities?
The first step is identifying those communities. On a sample of OpenAlex publication metadata, I used the Leiden algorithm to partition the network into clusters based on citation patterns. This gives us a data-driven map of the intellectual landscape: which papers cite which others, which groups tend to be internally connected, and which bits of the landscape are more isolated.
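For the curious, here's a minimal sketch of that step using python-igraph and the leidenalg package. The edge list, variable names, and the choice of modularity as the objective are all illustrative assumptions, not my exact pipeline:

```python
import igraph as ig
import leidenalg

# Illustrative citation edges: (citing paper, cited paper). In practice
# these would be OpenAlex work IDs drawn from the metadata sample.
edges = [("W1", "W2"), ("W1", "W3"), ("W2", "W3"), ("W4", "W5"), ("W5", "W3")]

g = ig.Graph.TupleList(edges, directed=True)

# Leiden partitions the citation network into communities. Modularity is
# one common objective; the granularity of the result depends on choices
# like this one (as the caveats below acknowledge).
partition = leidenalg.find_partition(
    g.as_undirected(), leidenalg.ModularityVertexPartition
)

# Map each paper to its home community, ready for the bridging analysis.
community_of = {
    g.vs[v]["name"]: cid
    for cid, members in enumerate(partition)
    for v in members
}
```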
Then I built what I call a Bridging Score to identify papers that connect across these boundaries. It's a composite measure that weights four factors (a sketch of how they combine appears below):
Connectivity (40%): What proportion of a paper's citations reach outside its home community? If most citations are cross-cluster, it's acting as a connector. If it's mostly reinforcing its own field, it's not.
Diversity (30%): How many different communities does it connect to? Bridging two fields is one thing; bridging five is something else entirely.
Centrality (20%): How often does this paper sit on the shortest paths between other papers in the network? This captures whether it's playing a "broker" role in knowledge flow.5
Impact (10%): A small weighting for citation count – partly as a quality filter, and partly because if nobody's building on it, it's probably not doing much bridging in practice.
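Putting those together: here's a minimal sketch of how the composite combines. It's illustrative, not my exact production code – in particular, I'm glossing over how each raw measure gets normalised to [0, 1]:

```python
def bridging_score(connectivity, diversity, centrality, impact):
    """Combine the four factors; each input is assumed pre-normalised to [0, 1]."""
    return (0.40 * connectivity   # share of citations leaving the home community
            + 0.30 * diversity    # spread of distinct communities reached
            + 0.20 * centrality   # betweenness, sampled for tractability (see footnote 5)
            + 0.10 * impact)      # citation count as a light quality filter


def raw_connectivity_diversity(paper, outgoing_cites, community_of, n_communities):
    """Raw cross-community measures for one paper (simplified)."""
    home = community_of[paper]
    targets = [community_of[c] for c in outgoing_cites if c in community_of]
    if not targets:
        return 0.0, 0.0
    # Connectivity: fraction of citations that cross a community boundary.
    connectivity = sum(t != home for t in targets) / len(targets)
    # Diversity: how many distinct other communities the paper reaches.
    diversity = len({t for t in targets if t != home}) / max(n_communities - 1, 1)
    return connectivity, diversity
```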
The result is a metric that identifies papers that aren't just interdisciplinary by intention or label, but functionally interdisciplinary in the structure of knowledge flow. In other words, papers with a high bridging score are moving ideas between fields.
Now, this isn't perfect. The community detection algorithm makes choices about granularity. The weightings are arbitrary and could easily be adjusted. Citation patterns don't capture everything about intellectual influence. And I haven’t controlled for some disciplinary effects (e.g. the effect of generous funding in certain fields).
But the Bridging Score is an attempt to move the conversation from institutional definitions to structural ones – from "what do we call interdisciplinary?" to "what does knowledge integration actually look like?"
Who's funding the bridges?
I ran this analysis on about 500,000 papers, using fractional counting to handle the co-funding problem (i.e. if a paper has three funders, each gets one-third credit). The question I wanted to answer was: which funders are systematically supporting work that scores high on genuine boundary-crossing?
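For the detail-minded, here's a minimal sketch of the fractional counting, including the bridging-rate threshold (≥ 0.5) I use below. The field names are illustrative:

```python
from collections import defaultdict

def funder_bridging_rates(papers, threshold=0.5):
    """papers: iterable of dicts with 'funders' (list) and 'bridging_score'.

    Returns each funder's bridging rate: the fractionally counted share
    of its papers with a Bridging Score at or above the threshold.
    """
    credit = defaultdict(float)    # total fractional papers per funder
    bridging = defaultdict(float)  # fractional papers above the threshold

    for paper in papers:
        funders = paper["funders"]
        if not funders:
            continue
        share = 1.0 / len(funders)  # three funders -> one-third credit each
        for f in funders:
            credit[f] += share
            if paper["bridging_score"] >= threshold:
                bridging[f] += share

    return {f: bridging[f] / credit[f] for f in credit}
```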
The results were illuminating – and I think surprising.
The first graph shows bridging performance at a funder level, where each dot is a funder in my sample (hover over each dot for more information). The further to the right a funder sits, the greater the average Bridging Score of the work it funds. Bigger funders sit towards the top, and bigger dots mark funders supporting larger numbers of bridging papers (though not necessarily at a higher rate, as you can see).
The second graph aggregates this data by funder type. (Again, hover over each dot for a breakdown).
What does this all mean?
Well, half a million papers sounds like a lot, but it’s only a small fraction of the full ~250 million-strong OpenAlex graph – and we can clearly see some sampling noise for smaller funders in the first graph. But I think we have some signals here, which can be understood as emerging insights that now need to be stress-tested at scale, with greater attention to sensitivity analysis, confounding factors, and so on.
So with those caveats in mind, here’s what I think the data tells us:
Insight 1: Mission-driven funders consistently outperform government bureaucracies.
International philanthropic foundations achieve high bridging rates (i.e. the percentage of funded papers with a bridging score ≥ 0.5). UK medical charities come a close second. At the top end, you see funders like the Chan Zuckerberg Initiative (28.4%), Howard Hughes Medical Institute (19.7%), and Alzheimer's Research UK (17.6%).
These are charitable organisations, but I am calling them mission-driven funders because what seems important is that they focus on specific problems – problems that happen, perhaps, to require integration across fields.
Insight 2: UK government funders lag substantially.
UK central government funders manage a bridging rate of just 5.9%. Even accounting for the broader scope of government funding, this is a horrible finding. If these results hold up across the full network, we need to draw important lessons from it, including about the vital importance of other types of funding – particularly charity and philanthropic funding – for enabling UK researchers to undertake risk-taking, boundary-crossing research.
Insight 3: Universities are unusually good at supporting interdisciplinary research from their own funds.
University internal funds score unusually highly – certainly much better than government project funding.
When universities back their own researchers' judgement about promising directions using their own limited resources, they get more boundary-crossing work than when government funding channels research through disciplinary review panels and administrative structures.
This is a striking finding for current UK policy discussions! It would appear to provide remarkable evidence for the value of quality-related research (QR) funding, and of the dual support system more broadly. If universities, using their own discretion, support more interdisciplinary research than government programmes fund them to do directly, then we should think very differently about how the public might best support the work that will solve problems for society.
Insight 4: China is clearly doing something right.
The National Natural Science Foundation of China achieves a strikingly high bridging rate of 12.5% – over double the UK government average. I initially wondered if this could be a data artefact or sampling bias – but it doesn't appear to be. If anything, coverage limitations probably understate the effect.
This is hard to square with commonplace narratives about Chinese research being conservative, or following rather than leading. Whatever structural features of their funding system produce this effect, they're worth understanding further. At the very least, this finding suggests that different institutional arrangements at a national level can systematically encourage different kinds of research. Though I suspect the implications could be a lot sharper than that.
The collaboration curve
I also flipped this analysis on its head – is there a relationship between the number of funders a paper acknowledges and its interdisciplinarity? Grouping papers by the number of co-funders, I found a clear but non-linear relationship:
Bridging scores rise with collaboration, peak at around 7-8 co-funders, then start declining. This indicates that there’s a sweet spot for collaborative funding – enough diversity to bring together different perspectives and resources, but not so much that coordination costs and consensus-seeking start to dominate.
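Here's a minimal sketch of that grouping in pandas – the rows and column names are illustrative, not my actual data:

```python
import pandas as pd

# Illustrative rows: each paper lists its acknowledged funders and its
# Bridging Score from the earlier analysis.
df = pd.DataFrame({
    "funders": [["A"], ["A", "B"], ["A", "B", "C"], ["B"]],
    "bridging_score": [0.31, 0.44, 0.58, 0.29],
})

df["n_funders"] = df["funders"].apply(len)

# Mean Bridging Score (and paper count) per co-funder count gives the
# collaboration curve.
curve = df.groupby("n_funders")["bridging_score"].agg(["mean", "count"])
print(curve)
```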
This would appear to have immediate implications for how we design interdisciplinary funding programmes. There is a benefit to collaboration, most likely arising naturally at the level of projects rather than from funder-to-funder arrangements. But building ever-larger partnerships and consortia may be counterproductive beyond a certain point. Perhaps very large collaborative efforts introduce bureaucratic overheads that actually reduce the risk-taking and boundary-crossing we think we're encouraging.
We must take these particular findings with a pinch of salt! The funder attribution metadata is patchy, and while I’m confident in the individual funder attribution, this ‘counting up’ is likely weakened by poor metadata coverage. But it’s interesting to see confirmation in the data that funder diversity and plurality seems to give rise to interdisciplinarity (up to a point). If nothing else, it reinforces the broader point that structure matters more than good intentions.
What it means for how we think about research
These emerging patterns reveal something important about the gap between rhetoric and reality in research funding. We've built systems that talk interdisciplinarity but are structured around disciplinary boundaries.
Despite integration within a single UKRI umbrella, UK government funding seems to remain unusually focussed on silos of activity, rather than integrative impact. UKRI is on a journey here (with very good early steps being taken), but it’s a bit of a shame we’re not further on.
More broadly, many researchers still report that their environments continue to reward contribution within disciplines, not between them – prioritising depth of impact over breadth of reach. It's clear from this data that knowledge integration is happening – but it is happening despite the overall architecture and intent of UK government policy, not because of it.
The success of mission-driven funders suggests that problem orientation naturally produces disciplinary integration. This makes sense: when you're trying to tackle Parkinson's disease or arthritis, you don't care whether the promising approaches come from neuroscience, engineering, or computer science. You care whether they work.
The performance of university internal funds suggests that researchers know how to identify boundary-crossing opportunities when given discretion to take risks. When universities invest their own limited resources, they seem to be better at supporting work that connects fields than when government funding filters research through expert review processes. One important lesson for the UK? Telling a crisis-stricken university base to focus on ever-decreasing circles of ‘specialisation’ is a path to impoverishing us all.
The Chinese pattern suggests that different funding structures and environments might systematically be encouraging different kinds of research behaviour. Of all of the findings, this is the one that merits further study outside of my methodology, as it suggests that the geopolitical implications of funding structure choices may be more profound than we’ve yet recognised.
Conclusion
Not all interdisciplinarity is good. Much of it is probably performative. But some is structurally real: papers that genuinely move knowledge between otherwise disconnected communities.
I return to AlphaFold as a test case. It's genuinely interdisciplinary in the sense that most matters: work that makes genuine advances in multiple fields, not just applications of one field's methods to another's problems. It required mission focus, technical depth, and the kind of institutional support that doesn't fit neatly into traditional disciplinary categories.
If we think that kind of knowledge integration matters for complex societal challenges, for scientific breakthroughs, or for system efficiency, then we need to understand what produces it.
The Bridging Score is one approach to making this distinction. The institutional patterns it reveals suggest we have more to learn about how funding structures shape knowledge production. The emerging data shows that mission-driven funders, flexible university funds, and problem-oriented institutions outperform disciplinary-facing bureaucracies at fostering genuinely interdisciplinary research.
This is a system feature we could design for, if we were serious about wanting the real thing.
Let’s leave the door open to that!
Acknowledgements: I am grateful to Ian Chapman, Ben Steyn, Stian Westlake, Christopher Smith, Helen Cross, Laura Ryan, Sam Roseveare, Pedro Serôdio, Sam Currie and Jim Coe for being enthusiastic about this work and/or encouraging me to share this with you all. All mistakes are entirely my own (and there will be some!).
1. The quantitative evidence is mixed on the topic. A 2009 meta-analysis of 8,000+ papers found no uniform citation advantage for interdisciplinarity across all fields; a 2019 study showed a clear citation advantage across 8 natural science fields.
2. "The hunt is on for the Renaissance Man of computing", The Independent, September 17, 1991.
3. I use the terms science and research interchangeably here, with apologies to the SHAPE disciplines.
4. Speaking of broader purposes, can I mischievously observe a fairly sizeable overlap between those who champion interdisciplinarity in an institutional sense, and those who are pushing broader academic-cultural agendas? It seems interdisciplinarity may be conveniently imprecise and broad enough to provide a kind of rhetorical cover for other motives to change how universities operate, who they hire, what they prioritise, and what their social values ought to be – in ways that not everyone would see as politically neutral.
5. 'Betweenness centrality', in the jargon. Assessed on a subset of pairs due to compute limitations. I handle the imprecision by weighting this component accordingly.