Radiological exams involve capturing detailed images of the body, typically through multiple acquisitions that explore different tissue properties and locations. The term “MRI protocol,” or more generally “imaging protocol,” is widely used to describe the detailed collection of acquisitions to perform for a given procedure, including all the technical image parameters, the list of post-processed images, and so on.

Knowing the imaging protocol performed for each exam is essential for many applications, including retrospective protocol statistics assessment, protocol deviation detection, or protocol normalization across scanners. Unfortunately, and as surprising as it may be, which protocol was performed for each exam is often difficult to recover.

Standards like HL7 and DICOM most often record the “general procedure type” (e.g., “MR Brain WO”) but not the specific protocol performed (stroke, TBI, pediatric tumor, etc.). What’s worse: while some protocols are explicit, e.g. encoded in a protocol book and/or programmed in the scanner console, others are created “on the fly,” from memory or rules of thumb, to account for patient-specific requirements. For example: using the default Fluid Attenuated Inversion Recovery (FLAIR) technique to detect lesions but, if the patient is young, shortening TR to ~6000 ms, reducing TE to ~100 ms, and adjusting the field of view and slice thickness to match the smaller head size.
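The pediatric FLAIR example above can be sketched as a small rule, to make the idea of an implicit, “on-the-fly” protocol tweak concrete. All names, the age cutoff, and the default values here are hypothetical, chosen only to mirror the example; real defaults vary by scanner and site.

```python
# Hypothetical default FLAIR parameters (illustrative values only).
DEFAULT_FLAIR = {"TR_ms": 9000, "TE_ms": 120, "FOV_mm": 240, "slice_mm": 5.0}

def adapt_flair_for_pediatric(params, patient_age_years):
    """Return a copy of the FLAIR parameters adjusted for a young patient."""
    adjusted = dict(params)
    if patient_age_years < 10:            # hypothetical age cutoff
        adjusted["TR_ms"] = 6000          # shorten TR to ~6000 ms
        adjusted["TE_ms"] = 100           # reduce TE to ~100 ms
        adjusted["FOV_mm"] = 200          # smaller field of view
        adjusted["slice_mm"] = 4.0        # thinner slices for a smaller head
    return adjusted

pediatric = adapt_flair_for_pediatric(DEFAULT_FLAIR, patient_age_years=6)
```

Because such rules live in a technologist’s head rather than in any system, the resulting acquisitions are often the only trace of which protocol was actually performed.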


Protocols as recipes

At Quantivly, we often compare protocols to recipes, in which acquisitions are ingredients and quantities are the technical parameters. My grandmother has a great banana bread recipe with flour, sugar, shortening, banana, and baking soda. She could use a recipe from the internet (explicit protocol); but she has it all in her memory (implicit protocol).  Both types of recipes (i.e. “protocols”) exist in radiology departments.


Our hypothesis: We can recover the imaging protocol performed during an exam from the collection of acquisitions performed and their technical parameters.

For example, if a susceptibility-weighted imaging (SWI) technique is used, there is a high chance this is an imaging protocol for traumatic brain injury, using SWI to detect microbleeds. Following our analogy with recipes: if there is flour, sugar, butter, banana, and baking soda, there is a high chance this is a banana bread, with potentially some variations (e.g., chocolate chips, vanilla extract, oil instead of butter, etc.).

Below, we describe how we built a new foundational model that automatically labels exams’ imaging protocols based solely on their acquisitions’ technical parameters.


From exams to graphs: a new foundational model for imaging protocol learning

A critical challenge when working with imaging exams is that the number of acquisitions varies, e.g. depending on the imaging protocol or the need for repeat images with uncooperative patients.

Mathematically, comparing two exams requires computing a distance between them; but exams with different numbers of acquisitions live in spaces of different dimensions, making such a distance ill-defined. More generally, it is extremely challenging to apply any machine-learning technique when each element lives in a space of a different dimension.

We propose a novel foundational model for imaging exams that unlocks, for the first time, the development of many downstream machine-learning applications. 

In short, we represent exams as graphs in which each node is an acquisition, with the node features encoding the acquisition’s technical parameters (e.g., TE, TR, field of view, …). We then leverage graph neural networks to automatically learn an embedding – or a latent space – that encapsulates the unique and valuable information representative of an exam. This “projection” is learned from the data by training a graph encoder-decoder (see figure below) to reconstruct the structural properties of the graphs (i.e., the adjacency matrices), minimizing a reconstruction loss over a large and diverse set of exams.
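To make the representation concrete, here is a toy, pure-Python sketch of the exam-as-graph idea: each acquisition becomes a node whose features are its technical parameters, and the graph here is simply fully connected for illustration. The “encoder” is a hypothetical stand-in (a fixed linear map followed by mean pooling) for the learned graph encoder-decoder described above; the point is only that a variable number of acquisitions maps to a fixed-size embedding.

```python
def exam_to_graph(acquisitions):
    """acquisitions: list of dicts of technical parameters.
    Returns (node_features, adjacency) for a fully connected exam graph."""
    feats = [[a["TE_ms"], a["TR_ms"], a["FOV_mm"]] for a in acquisitions]
    n = len(feats)
    adjacency = [[1 if i != j else 0 for j in range(n)] for i in range(n)]
    return feats, adjacency

def encode(feats, dim=2):
    """Toy encoder: mean-pool node features, then apply a fixed linear map.
    The weights are made up; a trained GNN encoder would replace this."""
    W = [[0.001, 0.0001, 0.004], [0.002, -0.0001, 0.001]][:dim]
    pooled = [sum(col) / len(feats) for col in zip(*feats)]  # mean over nodes
    return [sum(w * x for w, x in zip(row, pooled)) for row in W]

exam = [{"TE_ms": 100, "TR_ms": 6000, "FOV_mm": 200},
        {"TE_ms": 30, "TR_ms": 2000, "FOV_mm": 240}]
feats, adj = exam_to_graph(exam)
embedding = encode(feats)  # fixed size, regardless of acquisition count
```

The pooling step is what collapses exams of any size into the same fixed-dimensional space, which is exactly what makes downstream machine learning tractable.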

The resulting fixed-size embeddings are useful across various downstream applications, including but not limited to deviation detection, protocol optimization, patient scheduling, and standardization across scanners.


A true foundational model for imaging exams

Ever since the rise of large language models (LLMs), both open- and closed-source, many models have been described as “foundational models.” Most of them, however, are LLMs fine-tuned on domain-specific, carefully curated datasets (e.g., radiology). Others are constructed from scratch but use commonly accepted architectures and training paradigms. We see those as “domain-adapted” models rather than truly “foundational” models.

In contrast, our new model of imaging examinations explores an entirely new frontier – a varying-size data type – by embracing the inherent complexity through a graph-based representation. Training distills the intricate relationships between acquisitions and their technical parameters into a fixed-size representation, creating a novel mathematical space that captures exams and unlocks many downstream applications.


Exploring the model’s potential: clustering and sub-clustering capabilities

We first evaluated the ability of our model to project exams with similar protocols in nearby regions within the learned manifold, while positioning dissimilar protocols far apart. 

The figure shows the data from a single site – around 86,000 exams – and their associated embeddings reduced to a 2D mapping using UMAP. In this figure, each dot is an individual exam. The embeddings were analyzed using DBSCAN, a clustering algorithm based on the distance between points in the high-dimensional embedding space, and each exam is color-coded by its cluster.
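For readers unfamiliar with DBSCAN, here is a minimal, pure-Python version of the algorithm applied to a few made-up 2D points. In practice one would run a library implementation (e.g., scikit-learn’s) on the high-dimensional embeddings themselves; this sketch only shows the mechanics of density-based clustering, where points with enough close neighbors seed clusters and isolated points are labeled noise.

```python
def dbscan(points, eps=0.5, min_pts=3):
    """Return a cluster label per point (-1 = noise)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def neighbors(i):
        return [j for j, q in enumerate(points) if dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may become a border point later)
            continue
        cluster += 1                  # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:
                queue.extend(more)    # j is also core: keep expanding
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (10, 10)]
labels = dbscan(pts, eps=0.5, min_pts=3)  # two dense groups plus one outlier
```

The appeal of DBSCAN here is that it does not require fixing the number of clusters in advance – the number of distinct protocols at a site is precisely what we do not know.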

We further applied sub-clustering techniques in the highlighted areas, effectively isolating unique protocols with higher granularity.

This demonstrates the ability of our foundational model to reveal distinct clusters representative of different protocols, based solely on the technical parameters of exams and irrespective of the series or study descriptions.


Specific Use Case: Automatic Protocol Naming

After validating the ability of our foundational model to capture imaging protocols, we tested its combination with a query system to not only group similar exams, but also automatically name imaging protocols based on a small number of manually labeled exams.

More precisely, we used our model as a prototypical network for few-shot learning. We used FAISS, a library for efficient similarity search, to compute similarities in our embedding space between unknown exams and a support set (a small, manually labeled set of exams). The protocol label was inferred from the closest support example.

The figure below illustrates how this approach distinguishes protocols that differ by a single acquisition. Here we show examples of one-shot learning: for each variation, a small number of manually labeled exams (colored dots and stars) allow us to accurately label the protocol of new, unknown exams (gray circles). Stars represent the single example of each variation used as the support set within the one-shot learning system. Note that the acquisition names below were only used to add context to each cluster’s protocol; they were not used as inputs to our model.




In an effort to ease the recovery of radiological exams’ imaging protocols, we built a Foundational Model for Radiological Exams based on the hypothesis that the specific imaging protocol can be determined simply from an exam’s acquisitions and their associated technical parameters. In doing so, we discovered that the fixed-size embeddings generated by our Foundational Model have a wide range of possible downstream applications.

In this blog post, we focused on automating the naming of protocols given a small, curated dataset; here, the model serves as a projection module within a few-shot learning system, labeling incoming exams with the closest sample from the support set. In the future, we will explore other downstream applications, such as determining exams’ compliance with credentialing programs, detecting protocol deviations, generating highly granular protocol insights and statistics, and many more.

Join us!

Book a demo at the top of the page or visit us in person at SIIM 2024 in Startup Kiosk #9. Join us in exploring the future of healthcare operations with Quantivly’s digital twin technology. Discover how we’re leveraging this innovative approach to transform radiology departments and, ultimately, patient care. Together, we can reimagine the possibilities of healthcare delivery.