The @neurIST EU project
A (hypothetical?) episode
Imagine you have a car accident. Nothing really serious, but you collide with another car and are taken to the local hospital for a CT scan, "just to be sure". Now, the doctor examines the scan and approaches you in a serious tone: "We discovered an aneurysm in your brain, that is, a small dilation of an artery. It is not necessarily dangerous, but there is a chance it might rupture. In that case, you are likely to either die or suffer serious disability." You are deeply worried. "What can I do about that?", you ask. "Well", the doctor says, "we can operate, or insert a stent, but it is not clear whether that will cure the problem for good. Besides, surgery carries its own risks. As it is only a small aneurysm, I suggest we wait and just monitor it."
The clinical dilemma
This short and hopefully completely hypothetical episode describes a clinical dilemma. About 1-5% of the population develop a brain aneurysm during their lives. If such an aneurysm ruptures, it causes a subarachnoid haemorrhage (SAH) and stroke, and very often death or serious disability.
Fortunately, rather few of them actually rupture. But which ones? Today, many aneurysms are found "by accident", as in our little episode above. This worsens the dilemma: what should the surgeon or radiologist do in such a case? Not treating runs the risk of eventual rupture; treating raises the question of which treatment has the best long-term prospects and the least risk (and the treatment is likely to be unnecessary). These questions are highly controversial in the medical community.
Integrating information
Now, it is not true that nothing is known about aneurysms. Indeed, a lot of information is scattered throughout the literature. But this information has not yet been brought together to inform the clinician, nor is it likely to be sufficient to improve the current situation.
At this point, enter @neurIST. This large EU-funded research project set itself the goal of improving the management of brain aneurysms by integrating the disparate sources of information, and by unearthing new relationships between rupture risk and clinical indicators of the individual patient.
@neurIST has many facets. Here, we concentrate on the path that investigated biomechanical analysis as a means to learn more about individual aneurysms.
In search of biomechanical risk indicators
The direct cause of a rupture is clearly mechanical, namely stresses exceeding the strength of the aneurysm wall. Thus, it is natural to expect risk indicators to be found among the biomechanical properties of the aneurysm and its surroundings (and indeed, the research literature provides evidence that such indicators can be defined). The @neurIST project set out to investigate a large number of candidate risk indicators (such as maximum wall shear stress) on several hundred cases, a number far exceeding those underlying all other studies published so far.
The project looked for those indicators using three modes of analysis:
- First, purely geometric characterisations, like volume or aspect ratio,
- second, characterisations of blood flow through the aneurysm and nearby vasculature,
- and finally, aneurysm wall mechanics.
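To make the first mode concrete, here is a minimal sketch of a purely geometric characterisation: the aspect ratio of an aneurysm, commonly defined as dome depth divided by neck width. The function name, the inputs, and this particular definition are illustrative assumptions; the exact definitions used by @neurIST may differ.

```python
# Hypothetical sketch of a purely geometric risk indicator.
# Aspect ratio = dome depth / neck width; exact definitions
# vary between studies, so this is only one common variant.

def aspect_ratio(dome_depth_mm: float, neck_width_mm: float) -> float:
    """Return the aspect ratio of an aneurysm (both inputs in mm)."""
    if neck_width_mm <= 0:
        raise ValueError("neck width must be positive")
    return dome_depth_mm / neck_width_mm

# Example: a 7 mm deep aneurysm with a 2 mm wide neck
print(aspect_ratio(7.0, 2.0))  # 3.5
```

High aspect ratios are one of the candidate indicators discussed in the literature; the point here is only that such geometric measures reduce a segmented 3D shape to a few comparable numbers.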
Challenges on the way
It turned out to be a formidable task. Not only was the sheer number of cases (and the corresponding amount of work) unprecedented, but the intended use, finding statistical risk associations, meant that the outputs of these many analyses had to be free of bias. This had to hold despite a good dozen human operators, data coming from several different clinical centres using different scanning hardware and protocols, and a long chain of individual processing steps ranging from image segmentation through geometry processing, problem setup, solution and result extraction to the final statistical analysis.
Moreover, before any analysis could even start, the project had to take fundamental decisions on the details of the modelling, such as whether to use moving or fixed walls in the blood flow analysis. But the inherent uncertainties behind many of these decisions led to another challenge: what if later insight suggested revising some of them? How should one organise the data generated by manual processing and simulation in a way that would
- make later revisions of those modelling choices practical,
- permit different modeling choices to co-exist (without putting the statistical evaluation into jeopardy) and
- provide a clear provenance or audit trail of the results?
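One way to meet such requirements, sketched here as an assumption rather than the project's actual schema, is to record every processing step together with its inputs, tool version and modelling parameters. Results produced under different modelling choices can then co-exist side by side, each with a traceable provenance. All names in this sketch (`ProcessingStep`, the field names, the example identifiers) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProcessingStep:
    """One step in the analysis chain, with enough metadata to
    reconstruct how a result was obtained (hypothetical schema)."""
    name: str            # e.g. "segmentation", "CFD solve"
    tool_version: str    # exact software version used
    inputs: list         # identifiers of the input artefacts
    parameters: dict     # modelling choices, e.g. {"walls": "fixed"}
    output_id: str       # identifier of the produced artefact
    timestamp: datetime = field(default_factory=datetime.now)

# Two co-existing modelling choices for the same case, distinguished
# only by their recorded parameters:
fixed = ProcessingStep("CFD solve", "solver-2.1", ["mesh-042"],
                       {"walls": "fixed"}, "flow-042-fixed")
moving = ProcessingStep("CFD solve", "solver-2.1", ["mesh-042"],
                        {"walls": "moving"}, "flow-042-moving")
```

Because each result carries its parameters, a later statistical evaluation can select only those results produced under one consistent set of modelling choices.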
Strategies
@neurIST found a number of approaches to tackle these challenges:
- Firstly, there was an iterative process sorting out the fundamental modelling choices.
- Secondly, the project developed a highly automated tool chain to perform the necessary steps, from image segmentation to extraction of characterisations.
- Thirdly, @neurIST took a fairly abstract view of the data and processes, defining a data model and implementing database storage of the results, in order to expose them for broad use, unanticipated reuse, and automatic (re-)processing.
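The second strategy, an automated tool chain, can be pictured as a composition of stages, each turning one artefact into the next. The sketch below is purely illustrative (the stage names and placeholder return values are assumptions, not the project's actual software); its point is that when the whole chain is a single function of the input image and the modelling choices, any revised choice can be re-run automatically over all cases.

```python
# Hypothetical sketch of an automated tool chain: each stage maps one
# artefact to the next, so the full analysis is re-runnable end to end.

def segment(image):                 # image -> surface geometry
    return {"surface": f"surface({image})"}

def mesh(geometry):                 # surface geometry -> volume mesh
    return {"mesh": f"mesh({geometry['surface']})"}

def solve(vol_mesh, walls="fixed"): # mesh + modelling choice -> flow field
    return {"flow": f"flow({vol_mesh['mesh']}, walls={walls})"}

def extract(flow):                  # flow field -> scalar characterisations
    return {"max_wss": f"max_wss({flow['flow']})"}

def pipeline(image, walls="fixed"):
    """Run the whole chain for one case under one modelling choice."""
    return extract(solve(mesh(segment(image)), walls=walls))

result = pipeline("scan-001", walls="fixed")
```

Rerunning `pipeline` with `walls="moving"` would regenerate the downstream results for the revised modelling choice without any manual intervention.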