ABSTRACT

This paper examines how engineering education scholars have employed quantitative metrics to analyze curricula. This work is situated within curricular analytics, an approach that employs network analysis to measure "curricular complexity" and its associations with outcomes, including retention and program quality. Since the introduction of this framework, researchers have proposed additional metrics to capture different dimensions of complexity. However, no effort has been made to consolidate these metrics in a single resource that highlights the suite of analytical options available to the community. To address this gap, we conducted a scoping review, beginning with foundational articles on curricular analytics and purposefully sampling papers that cited these works. Our guiding research question was: What metrics do researchers use to quantify the complexity of curricula? Of the 174 papers identified, 65 met our inclusion criteria after duplicates were removed. Through these studies, we identified 23 unique metrics, which we classified into structural and instructional complexity across three levels of analysis. We aim for this catalog of metrics to serve as a reference for the basics of curricular analytic metrics and as a practical tool to support curriculum design and optimization.

Key words: curriculum analysis, curriculum complexity, graph theory, metrics, network analysis, quantitative methods

INTRODUCTION

Curricular analytics, first encapsulated by the concept of curricular efficiency (Wigdahl et al., 2014), is an emerging quantitative framework in engineering education that leverages publicly available curriculum information and student course-taking data to assess academic programs for various purposes (e.g., as evidence of continuous improvement for reports to the Accreditation Board for Engineering and Technology [ABET]). While broader conceptions of curricular analytics might include both qualitative and quantitative evaluation of course- and program-level information within a given curriculum, we adopt Hilliger et al.'s (2022) quantitatively oriented definition of curricular analytics as "the collection, analysis, and visualization of program- and course-level data, such as program structure and course grading, aiming to inform curriculum renewal strategies at a program level". Rather than viewing curricula solely as course lists or static plans of study, the curricular analytics framework treats them as systems whose structure can be analyzed and optimized. Within curricular analytics, we model programs as directed graphs, where vertices represent courses and edges denote prerequisite or corequisite relationships. This representation enables researchers to examine the curriculum's structural properties, such as sequencing and dependencies, quantitatively, and these properties can be decomposed into different design patterns (Heileman et al., 2017). The framework examines curricular complexity through two key dimensions: structural complexity (i.e., how the curriculum is organized) and instructional complexity (i.e., how courses are taught and supported; Heileman et al., 2018).

The structural complexity is perhaps the most straightforward component of the framework to explore, as it only requires program information. To calculate structural complexity, we compute two quantities for each course in the network: the blocking factor and the delay factor. The blocking factor is the number of courses a student would be unable to take if they fail a particular course in the plan of study, whereas the delay factor is the longest prerequisite chain to which the course belongs. Adding these two quantities together yields the course's cruciality—higher values indicate the course's relatively greater importance to the curriculum, as proxied by the number of requirements it is directly or indirectly associated with. Summing all the crucialities yields the structural complexity, a summary score that represents the program's overall complexity.

In engineering pathways, curriculum misalignment is a significant barrier to student success; hence the focus on quantifying the interconnectedness of course requirements. For example, despite transfer students making up a significant portion of engineering graduates, they often face challenges due to mismatches in prerequisites and inefficient course sequences, resulting in a longer time-to-degree (Starkey, 2021). The structure is only one aspect of the curriculum, however. As such, the instructional complexity is intended to encompass teaching-related factors, such as instructor quality, course availability, and student support (Heileman et al., 2018). Currently, instructional complexity is proxied by individual course pass rates.

How does this conceptualization of curricular analytics fit into broader conversations in the literature? De Silva et al. (2024) conducted a systematic literature review of how curricula have been analyzed in higher education, synthesizing 59 studies to examine the types of solutions proposed, their applications, and their maturity levels. Their findings revealed that most studies focused on program-level curriculum analysis, often aiming to understand curriculum structures rather than to implement optimization strategies directly. While a few studies focused on curriculum refinement, the majority remained at preliminary stages of development or implementation, underscoring the need for more actionable, stakeholder-informed approaches in curricular analytics. Considering a core goal of curricular analytics is to improve students' academic outcomes, Zulkifli (2019) conducted a systematic review of predictive models used in higher education to assess student performance. The review found that most models employed classification techniques and relied heavily on student demographic and academic process variables. While these models offer valuable insights into student outcomes, they typically focus on individual-level predictors rather than program-level structures. Moreover, Li et al. (2023) conducted a synthetic literature review on the role of analytics in curriculum improvement across both course and program levels. At the program level, curriculum analytics address broader issues, such as assessing student competencies, identifying curriculum patterns, and ensuring alignment with institutional guidelines using data on grades, course registrations, and program structures. However, the review highlights that, despite growing interest in curricular analytics tools, there is limited guidance on their practical implementation and on their impact on curriculum design and student outcomes. This highlights an opportunity to complement both approaches within curricular analytics, which incorporates the structural features of academic programs—such as course dependencies, pathway constraints, and bottlenecks—into models of student success.

As Hilliger et al. (2020) note, it remains unclear how curricular analytics tools are used to support curriculum improvement in real-world institutional contexts. This underscores the need for a focused review of existing techniques—not only to map the range of metrics developed but also to provide guidance on their application. Although numerous metrics exist to analyze curriculum structures, most studies employ only a limited subset, often with inconsistent operationalizations and varying applications. A consolidated review can therefore help surface promising practices, clarify gaps, and advance the field toward more effective curriculum design and optimization.

RESEARCH AIMS

This paper examines how researchers use quantitative methods to explore curricular structures in engineering. Considerable interest has emerged in quantifying curricula to explore how design choices affect dimensions of student success, with curricular analytics gaining particular prominence. Curricular analytics has focused on treating the curriculum as a network and calculating various graph-theoretic measures, as well as mapping learning outcomes to external frameworks, such as ABET student outcomes, for program-level assessment. However, these metrics can vary in scope, valuing different latent characteristics of the curriculum, such as course topics, sequencing, interconnectedness, or course difficulty. This paper aims to synthesize current work on how researchers have operationalized the curriculum for measurement using curricular analytics, and to relate these properties to student outcomes. By synthesizing existing knowledge and identifying areas for future research, this work addresses the following research question: What metrics do researchers use to quantify the complexity of engineering curricula?

METHODS

To explore the use of data-driven methods for analyzing curricula in engineering education research, we conducted a scoping review. This approach was selected because it enables a comprehensive examination of how key concepts are defined, operationalized, and applied across diverse contexts. Unlike systematic reviews, which focus on evaluating effectiveness, scoping reviews are particularly useful for mapping the range and characteristics of existing literature in emerging areas (Grant & Booth, 2009; Munn et al., 2018). We identified relevant studies using a forward citation chaining method, starting from foundational papers on the curricular analytics framework, and extracting metrics discussed in the citing literature. In this section, we detail the procedures for conducting the scoping review and for data analysis.

Data collection

Because terms like "curricular analytics" and "curricular analysis" can have multiple meanings and yield numerous unrelated manuscripts, we opted for a more direct search approach. We collected articles that cited two foundational works on curricular complexity where the term is currently conceptualized (Heileman et al., 2018; Slim, 2016). We picked these works for their high citation counts and focus on the use of curricular analytics rather than on application in specific settings. Based on the citation patterns collected in August 2023, we identified 174 unique studies citing the two works using the forward-citation chaining method with one degree of separation. Next, we examined the eligibility of the initially searched papers based on the inclusion criteria (IC) centering on the purpose of this study, which are listed below:

(IC1) The method(s) employed in the paper must be quantitative or mixed methods, as we are only interested in collecting a list of quantitative metrics used to study the curriculum;
(IC2) The study must focus on analyzing curricula, as we were not aiming to collect papers on broader learning analytics applications that primarily involve student records;
(IC3) The work must use or build upon network analysis, as the foundational framework for curricular analytics is rooted in such concepts;
(IC4) The article must be written in English; and
(IC5) The context of the study must include engineering, given our focus on applications in engineering education.

After addressing duplicates, the first and second authors applied the five ICs in the screening stage at the abstract and title level, removing 83 papers from the sample. In the appraisal stage, the same two authors evaluated the remaining 91 papers using the full texts. This step excluded another 26 papers from the sample for violating one or more inclusion criteria. A common reason for exclusion was that a paper merely cited the curricular analytics literature while describing something entirely different, violating IC3. The final corpus comprised 65 papers, including journal and conference articles, literature reviews, and dissertations. Figure 1 illustrates the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart (Page et al., 2021) that summarizes the search process. The first and second authors met weekly to discuss the procedures for the scoping review and to ensure consistent application of the inclusion criteria.

Figure 1

Figure 1. PRISMA flowchart of this scoping literature review of curriculum analytics metrics. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Analysis

Because the focus of this work was on the metrics researchers used, we extracted the pertinent information from the included papers, identifying the type of metric and its operationalization. Because some metrics could be defined differently or duplicates might emerge, we grouped metrics measuring similar constructs. Metrics were categorized as related to structural and instructional complexity based on Heileman et al. (2018). Moreover, to classify the level of analysis, we examined how each metric was applied using three levels: Student, course, and curriculum. Student-level metrics captured students' performance within or across courses. Course-level metrics involved properties of courses as units in the network, meaning the value assigned was tied to the specific vertex in the curricular network associated with each course. Finally, curriculum-level metrics assigned a value to the entire program, establishing a global measurement for the network.

Limitations

This scoping review is primarily limited by the search strategy we used to identify relevant manuscripts. The conceptions of curricular analytics (or analysis) are expansive, and this review captures one perspective - drawing heavily from curricular analytics as defined by Heileman et al. (2018). As such, our results are not inclusive of other definitions of curricular analytics. Our decision to use a forward-chaining approach was driven by a pragmatic need to filter out irrelevant articles that did not align with our intended definition of curricular analytics. Still, our choice of articles as the basis for the search heavily influences the eligible manuscripts that end up in our sample. As such, independent developments of similar network-based approaches to curricular analytics would not have been captured. Finally, because this work was undertaken as a scoping review, the quality of the manuscripts was not appraised, as is typical for scoping reviews. Although this focus on the content of the manuscripts has little impact on our results, we cannot draw conclusions about each manuscript's implementation of curricular analytics; doing so would require more careful attention.

RESULTS

The findings section is organized into three main parts. To begin, we provide an overview of the necessary network analysis concepts that the metrics build upon in their operationalization. We then discuss structural and instructional metrics separately.

Formally, we can describe a curriculum C as a directed graph, G_C = (V, E), where V denotes the set of courses (vertices), and E represents the prerequisite and corequisite relationships (edges) between them. To help ground our discussion, Figure 2 visually represents a sample curriculum structure. For example, in Figure 2, course B serves as a prerequisite for courses C, D, and E.

Figure 2

Figure 2. Example curricular network. A-L are arbitrary course names.
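
To ground the formal definition, the following minimal Python sketch (using the networkx library) reconstructs the curriculum in Figure 2 as a directed graph. Only the edges explicitly named in the prose are certain; the prerequisite for H (here E → H) and the ordering of J's prerequisite sequence (here K → L → J) are assumptions chosen to be consistent with the totals reported later in this paper (e.g., a structural complexity of 52).

```python
import networkx as nx

# Courses are vertices; a directed edge (u, v) means u is a prerequisite of v.
edges = [
    ("B", "C"), ("B", "D"), ("B", "E"),  # B is a prerequisite for C, D, and E
    ("D", "F"), ("D", "G"), ("G", "I"),  # the longest chain: B -> D -> G -> I
    ("E", "H"),                          # assumed: H's prerequisite is not named
    ("K", "L"), ("L", "J"),              # assumed: J's prerequisite sequence
]
G = nx.DiGraph(edges)
G.add_node("A")  # isolated course: no prerequisites, prerequisite for nothing

print(G.number_of_nodes(), G.number_of_edges())  # 12 courses, 9 requisites
print(list(G.successors("B")))                   # ['C', 'D', 'E']
```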

While defining the various metrics in this review, certain terms will appear more frequently than others. In Table 1, we present these common terms and their definitions for reference.

Table 1: Notation and conventions of curriculum analytics metrics for reference
Notation | Construct / Variable
v_{ij} | A vertex (i.e., course) i in term j
deg(v_{ij}) | The degree of a vertex: the number of edges connected to the vertex. A − superscript on deg refers to the in-degree (directed edges pointing to the vertex), and a + superscript refers to the out-degree (directed edges pointing away from the vertex)
V | The set of vertices (i.e., courses) in the network
E | The set of edges (i.e., pre- and corequisites) connecting the vertices
n_t | The number of terms in the plan of study
t_e | The expected time-to-degree (i.e., the final term before graduating)
p(v_a, v_b) | A path, p, of vertices from v_a to v_b

Next, Table 2 summarizes our categorization of the extracted metrics into three levels (student, course, and curriculum), along with their definitions. Student-level metrics assign values to individual students to evaluate how they navigate the curriculum, including their performance, course selection patterns, and progression through the program. In contrast, course-level metrics assign values to individual courses to analyze their structural roles and relationships within curricular networks. Curriculum-level metrics assess the overall structure and connectivity of the curriculum using a single score or set of scores. A metric was classified as structural if it used a network component in its calculation.

Table 2: Categorization of metrics in curricular analysis
Level | Definition | Structural metrics | Instructional metrics
Student | Metrics that assign a value to individual students, evaluating how students navigate the curriculum, including their performance, course selection patterns, and progression through the program. | N/A | Pass rate / DFW rate (i.e., grades of D or F and withdrawals); grade anomaly
Course | Metrics that assign a value to individual courses, analyzing their structural roles and relationships within curricular networks. | Course (in/out) degree; blocking factor; delay factor; deferment factor; bottleneck course; inflexibility factor; (degree, betweenness, edge) centrality; cruciality | Pass-through effect; course grade anomaly; course toxicity
Curriculum | Metrics that assess the overall structure and connectivity of the curriculum using a single score or set of scores. | Structural complexity; curriculum rigidity; degrees of freedom; transfer delay factor; complexity explained; curriculum stringency | Conformity score; student mobility turbulence; curriculum stringency
DFW, grades of D and F in addition to withdrawals.

On the other hand, a metric was classified as instructional if it attempted to quantify instructional quality in some way—this included pass rates, which the framework initially associated with instructional complexity. Note that some metrics, such as curricular stringency, appear in both the structural and instructional complexity categories. This duplication arises because the metric's calculation involves both network-related quantities and instructional factors, as established by Heileman et al. (2018).

Structural complexity

In this section, we build from the simplest structural complexity metrics found in our sample to the more involved calculations. To begin, we consider the basic course-level metrics.

Course-level metrics - classifications of courses using vertex degree

Course-level metrics examine individual courses and their structural roles within a curriculum network. These metrics capture how courses function as prerequisites, influence student progression, and contribute to overall curricular complexity. As such, authors have identified specific configurations of prerequisite structures at the course level; for example, an isolated course has no prerequisites and does not serve as a prerequisite for other courses (e.g., A in Figure 2; Loge, 2022). Formally, an isolated course is a vertex with a degree of zero (deg(v_{ij}) = 0), where the degree is the number of edges connected to the vertex.

For non-isolated courses, we could examine how many courses are direct prerequisites to the course in question using the course in-degree (deg^-(v_{ij})). Course in-degree is the number of directional edges pointing to a course, or the number of prerequisite courses of course v_{ij}. For instance, in Figure 2, B is the only course pointing to D, so course D's in-degree is one. The course out-degree (deg^+(v_{ij})) is the number of directional edges leaving the vertex; in other words, the out-degree is the number of courses specifying the given course as the prerequisite. For example, course B has an out-degree of 3, pointing to courses C, D, and E. By combining the in-degree and out-degree, we can further assess a course's overall connectivity within the curriculum network. Degree centrality captures this idea by measuring the total number of direct connections a course has, reflecting its immediate influence and structural importance in the curriculum (Loge, 2022). It is expressed as:

\deg(v_{ij}) = \deg^-(v_{ij}) + \deg^+(v_{ij})

For instance, in this curriculum network, course D has a degree centrality of 3 because it has an in-degree of 1 and an out-degree of 2. Compared to the other courses shown in Figure 2, D has a high degree centrality, indicating it serves as a gateway to advanced courses.
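
These quantities can be read directly off the directed graph; the following minimal sketch (with the Figure 2 edges named in the text) illustrates this using networkx.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"),
                ("D", "F"), ("D", "G"), ("G", "I")])

print(G.in_degree("D"))   # 1 -- only B points to D
print(G.out_degree("D"))  # 2 -- D points to F and G
print(G.degree("D"))      # 3 -- D's degree centrality (in-degree + out-degree)
print(G.degree("B"))      # 3 -- same total, but entirely from out-degree
```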

Other metrics for courses possessing specific prerequisite relationships include linked courses and target courses. A linked course (Loge, 2022) is defined as a course with at least one prerequisite or that is a prerequisite for at least one course (i.e., deg^-(v_{ij}) ≥ 1 or deg^+(v_{ij}) ≥ 1). A target course is simply a course with a prerequisite (Loge, 2022); however, this language is not standardized in the sample, with Davis et al. (2020) using the term more generally to refer to a course a student wants to complete with a specific grade as part of a course recommendation algorithm. However, more intricate relationships can be captured beyond these general classifications.

Course-level metrics - defining "important" courses in the curriculum

Using the concepts of in-degree and out-degree, the bottleneck course metric can be used to identify gateway courses students must complete to access substantial portions of their remaining program (Molontay et al., 2020). The formula to determine whether a course v_{ij} is a bottleneck uses the logical operator or (∨) and three inequalities:

\deg^-(v_{ij}) \geq a \;\vee\; \deg^+(v_{ij}) \geq b \;\vee\; \deg(v_{ij}) \geq c

The values a, b, and c represent the thresholds for a course to be considered a bottleneck. Specifically, a represents the minimum number of prerequisite courses (in-degree); b refers to the minimum number of courses for which a course acts as a prerequisite (out-degree); and c indicates the total degree (sum of in-degree and out-degree) required for a course to qualify as a bottleneck. If any of the inequalities hold, the course is considered a bottleneck. Unlike the degree centrality metric, the bottleneck course metric enables us to distinguish between different types of prerequisite arrangements using threshold values. For example, consider B and D in Figure 2. Both courses have a degree centrality of 3. However, if we require that a and b be 3 and c be 5 (so the course must have at least three prerequisites, be the prerequisite for at least three courses, or be connected to five or more courses), B would be considered a bottleneck course, whereas D would not. There is scant advice on what the values of a, b, or c should be, with the exception of Wigdahl et al. (2014) using (3, 3, 5) as their threshold values.

The bottleneck classification is inherently parameter-dependent. The set of courses labeled as bottlenecks can change substantially depending on the chosen thresholds: (a, b, c). This creates a comparability issue when bottleneck results are contrasted across programs, institutions, or time, because observed differences may reflect threshold choice as much as underlying curricular structure. In the same curriculum, a threshold such as (3, 3, 5) may flag only a small number of highly connected "hub" courses, whereas (2, 2, 4) could classify a much larger set of gateway courses as bottlenecks. Therefore, for comparative analyses, researchers should (1) report (a, b, c) explicitly, (2) justify the choice using a clear rationale (e.g., prior work, curriculum size/density), and (3) consider a sensitivity check showing whether conclusions hold across a range of thresholds.
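
A minimal sketch of the threshold test and a simple sensitivity check follows; the function name and defaults are illustrative, with (3, 3, 5) taken from Wigdahl et al. (2014).

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"),
                ("D", "F"), ("D", "G"), ("G", "I")])

def is_bottleneck(G, v, a=3, b=3, c=5):
    """Flag v as a bottleneck if ANY of the three inequalities holds (logical or)."""
    return (G.in_degree(v) >= a) or (G.out_degree(v) >= b) or (G.degree(v) >= c)

print(is_bottleneck(G, "B"))  # True  -- out-degree of 3 meets threshold b
print(is_bottleneck(G, "D"))  # False -- in 1, out 2, total 3: nothing holds

# A simple sensitivity check across alternative thresholds:
for thresholds in [(3, 3, 5), (2, 2, 4)]:
    flagged = [v for v in G if is_bottleneck(G, v, *thresholds)]
    print(thresholds, flagged)  # (2, 2, 4) also flags D as a bottleneck
```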

Continuing with the idea of prerequisite relationships, we have the blocking factor (b_c) next, which is the number of courses a student cannot take if a prerequisite is failed (Heileman et al., 2018; Slim, 2016):

b_c(v_{ij}) = \sum_{v_{kl} \in V} I(v_{ij} \leadsto v_{kl})

where I(v_a \leadsto v_b) is the indicator function signifying whether there is a prerequisite chain linking two arbitrary courses v_a and v_b.

Unlike the bottleneck course metric, which only considers the requisite relationships the course is in proximity to, the blocking factor considers dependencies throughout the network. In the example shown in Figure 2, if Course B is failed, it directly blocks Courses C, D, and E. Additionally, Courses F, G, H, and I are indirectly blocked, bringing the total to seven blocked courses for Course B. The blocking factor directly measures the extent to which a particular course would hinder student progression in the degree program (Heileman et al., 2018).

The reachability factor (r_c) is the complement of the blocking factor. While the blocking factor measures how many courses become inaccessible if a specific course is failed, reachability instead quantifies the number of prerequisite courses that must be completed before a given course can be taken. To calculate the reachability of a course, we count the number of courses that must be completed to enroll in the course we're considering (Heileman et al., 2018):

r_c(v_{ij}) = \sum_{v_{kl} \in V} I(v_{kl} \leadsto v_{ij})

For example, the reachability factor for Course I is 3 because one must take Courses B, D, and G to ultimately enroll in Course I.
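
Both factors reduce to the sizes of reachable sets in the graph, which networkx exposes directly: the descendants of a course are the courses it blocks, and its ancestors are the courses that must be completed first. A sketch follows (E → H assumed, as before).

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"),
                ("D", "G"), ("G", "I"), ("E", "H")])  # E -> H is assumed

blocking = {v: len(nx.descendants(G, v)) for v in G}    # b_c for every course
reachability = {v: len(nx.ancestors(G, v)) for v in G}  # r_c for every course

print(blocking["B"])      # 7 -- C, D, E directly; F, G, H, I indirectly
print(reachability["I"])  # 3 -- B, D, and G must be completed first
```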

In addition to the relationships between courses, timing within the curriculum network is crucial in determining how course failures affect student progress. The deferment factor quantifies the impact of repeated failures by measuring the threshold before additional semesters are required (Molontay et al., 2020):

D_c(v_{ij}) = \frac{1}{k + 1}

where k represents the maximum number of allowable failures before exceeding the expected program completion time. A higher deferment factor indicates low flexibility, whereas a lower value suggests that failing a course is less likely to lead to an extended time-to-degree. For example, assuming a four-term completion time, B, D, G, and I have deferment factors of 1. This means that students cannot fail those courses without also extending their time-to-degree. On the other hand, A has no requisite relationships. Therefore, a student could retake Course A three times (corresponding to D_c(A) = 0.25) before that specific requirement begins impacting their graduation timing. By incorporating graduation timing alongside the structural properties of the curriculum, the model quantifies how prerequisite relationships influence a student's expected completion time. Specifically, the deferment factor identifies courses where failures or delays create cascading effects, highlighting critical courses that disproportionately impact student progression and overall time-to-degree (Molontay et al., 2020).

At its core, the deferment factor concerns the overall structure of prerequisite sequences. The most closely related metric is the delay factor (d_c), which represents the longest prerequisite chain that includes the course. Heileman et al. (2018) provide the following definition, where P_{v_{ij}} denotes the set of source-to-sink paths in the curriculum graph that contain v_{ij}, and \#(p) denotes the number of courses in path p:

d_c(v_{ij}) = \max_{p \in P_{v_{ij}}} \#(p)

For example, as shown in Figure 2, the longest course sequence chain is 4 for courses B, D, G, and I, since the sequence {B→D→G→I} exists. Therefore, each of these courses has a delay factor of 4. A course with a high delay factor represents a high-stakes course for timely graduation (Heileman et al., 2018). While the blocking factor measures how many courses become inaccessible if a course is failed, the delay factor captures how prerequisite chains can impact a student's time to degree completion.
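
Because curricula are directed acyclic graphs, the delay factor can be computed by enumerating source-to-sink paths. The brute-force sketch below is adequate for curriculum-sized graphs, though it would not scale to dense networks.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"),
                ("D", "G"), ("G", "I"), ("E", "H")])  # E -> H is assumed

def delay_factor(G, v):
    """Length of the longest source-to-sink path containing course v."""
    sources = [n for n in G if G.in_degree(n) == 0]
    sinks = [n for n in G if G.out_degree(n) == 0]
    longest = 1  # an isolated course sits on a chain of one course
    for s in sources:
        for t in sinks:
            for path in nx.all_simple_paths(G, s, t):
                if v in path:
                    longest = max(longest, len(path))
    return longest

print([delay_factor(G, v) for v in "BDGI"])  # [4, 4, 4, 4] -- the longest chain
print(delay_factor(G, "C"))                  # 2 -- C only sits on B -> C
```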

Another aspect of the original conceptualization of curricular analytics, as noted with the deferment factor, is the influence of timing on curricular complexity. A metric that incorporates timing directly into its calculation is the inflexibility factor I_f(v_{ij}). This metric aims to quantify the impact of courses with limited offerings, such as fall-only courses, by examining the effect on students' time to degree when course completion is delayed (Reeping & Grote, 2022). The formula for the inflexibility factor builds on the delay factor by applying two penalty terms: one for the number of terms that would ultimately extend the students' time to degree, and one for the number of terms the course was shifted:

I_f(v_{ij}) = d_c(v_{ij}) \times \left( j + p(v_{ij}) + t_w(v_{ij}) \right)

Here, j denotes the term the course was originally scheduled in, p(v_{ij}) is the number of terms that the course extends beyond the expected time to degree once moved (if applicable), and t_w(v_{ij}) is the number of semesters the student must wait to attempt the course again. For example, consider course G in Figure 2, which has a delay factor of 4 (d_c(G) = 4) and is positioned in term 3 of the curriculum (j = 3). Assume the course is offered only in the Fall and the expected graduation term is Term 4. When we move it to the next possible semester, the student's time to degree is extended by 2 semesters beyond the expected graduation time (p(v_{ij}) = 2). Because the course was Fall only and requires waiting two terms for the next offering, t_w(v_{ij}) = 2. Taken together, the inflexibility factor would be I_f(G) = 4 × (3 + 2 + 2) = 28. This score indicates that the course imposes substantial timing constraints, which can impact a student's ability to complete the program on time. The higher the inflexibility factor, the more critical the course becomes in delaying student progression.
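
The arithmetic of this worked example can be captured in a one-line function; the formula is reconstructed from the example above, so the signature is illustrative.

```python
def inflexibility(delay, j, p, t_w):
    """I_f(v) = d_c(v) * (j + p(v) + t_w(v)), per the worked example above."""
    return delay * (j + p + t_w)

# Course G: delay factor 4, scheduled in term 3, a two-term extension, and a
# two-term wait for the next Fall-only offering.
print(inflexibility(delay=4, j=3, p=2, t_w=2))  # 28
```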

Course-level metrics - combining other factors to form "importance" scores

Having examined how blocking, reachability, deferment, and delay factors reveal the structural constraints within a curriculum, we now turn to a metric that identifies the most critical courses within it - cruciality. A course's cruciality c(v_{ij}) is the sum of the course's blocking and delay factors (Heileman et al., 2018; Slim, 2016):

c(v_{ij}) = b_c(v_{ij}) + d_c(v_{ij})

For example, course B's cruciality is 11, because c(B) = b_c(B) + d_c(B) = 7 + 4 = 11. Course cruciality identifies courses that are critical to timely graduation (Slim et al., 2014), but authors have extended this concept to incorporate timing in the calculation. To accomplish this, weights are assigned to terms to penalize crucial courses scheduled later in a curriculum. Term-weighted course cruciality is defined as the product of course cruciality c(v_{ij}) and the term j in which the course is taken:

c_w(v_{ij}) = j \times c(v_{ij})

Term-weighted cruciality can be used to evaluate the effectiveness of course placement within the curriculum (DeRocchis et al., 2021). Higher values suggest that crucial courses are being scheduled too late, potentially delaying student progression. In contrast, lower values indicate better alignment, with key courses appearing earlier in the sequence, supporting smoother advancement toward graduation. These term-weighted values can be compared with the unweighted crucialities to uncover unexpected bottlenecks based on their placement in the program.
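
Combining the earlier blocking- and delay-factor sketches yields cruciality and its term-weighted variant. The term placements below follow Figure 2 and remain assumptions where the figure's details are not spelled out in the text.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"),
                ("D", "G"), ("G", "I"), ("E", "H")])  # E -> H is assumed
terms = {"B": 1, "C": 2, "D": 2, "E": 2, "F": 3, "G": 3, "H": 3, "I": 4}

def delay_factor(G, v):
    sources = [n for n in G if G.in_degree(n) == 0]
    sinks = [n for n in G if G.out_degree(n) == 0]
    lengths = [len(p) for s in sources for t in sinks
               for p in nx.all_simple_paths(G, s, t) if v in p]
    return max(lengths, default=1)

cruciality = {v: len(nx.descendants(G, v)) + delay_factor(G, v) for v in G}
weighted = {v: terms[v] * cruciality[v] for v in G}

print(cruciality["B"], weighted["B"])  # 11 11 -- blocking 7 + delay 4, term 1
print(cruciality["D"], weighted["D"])  # 7 14  -- the term weight doubles D
```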

Course-level metrics - other perspectives on "importance"

Another perspective for quantifying the importance of a course is centrality. Centrality considers all of the long prerequisite pathways that include the course; the centrality of v_{ij}, denoted q(v_{ij}), is defined as follows (Heileman et al., 2018):

q(v_{ij}) = \sum_{p \in P_{v_{ij}}} \#(p)

Here, P_{v_{ij}} = {p_1, p_2, …, p_n} is the set of all paths in the curriculum graph that include v_{ij}, start at a source node (i.e., a course with no prerequisites), and end at a sink node (i.e., a course with no successors), with v_{ij} as neither the source nor the sink. Unlike the delay factor, which focuses on the longest prerequisite chain that includes the course, centrality captures all prerequisite chains that do. Under this operationalization of importance, a course is more important when it is an element of multiple prerequisite chains. Note that courses at the beginning or end of a pathway have zero centrality by definition. For example, in Figure 2, Course D has a high centrality score because it connects foundational courses, such as B, to advanced courses, including F, G, and I.
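
The path-based definition can be implemented with the same brute-force enumeration used for the delay factor, skipping paths where the course is an endpoint; a sketch follows.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"),
                ("D", "G"), ("G", "I"), ("E", "H")])  # E -> H is assumed

def centrality(G, v):
    """Sum of lengths of source-to-sink paths with v as an interior vertex."""
    sources = [n for n in G if G.in_degree(n) == 0]
    sinks = [n for n in G if G.out_degree(n) == 0]
    total = 0
    for s in sources:
        for t in sinks:
            if v in (s, t):
                continue  # endpoints have zero centrality by definition
            for path in nx.all_simple_paths(G, s, t):
                if v in path:
                    total += len(path)
    return total

print(centrality(G, "D"))  # 7 -- paths B-D-F (3 courses) and B-D-G-I (4)
print(centrality(G, "B"))  # 0 -- B is a source, never an interior vertex
```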

Although centrality captures how frequently a course appears on any long path in the curriculum, betweenness centrality focuses specifically on a course's role in connecting other courses via shortest paths. It quantifies how often a course serves as a bridge between two different courses (s and t) in the shortest possible sequence. Formally, it is expressed as follows:

g(v_{ij}) = \sum_{s \neq v_{ij} \neq t} \frac{\sigma_{st}(v_{ij})}{\sigma_{st}}

where σ_{st} is the total number of shortest paths between courses s and t (Molontay et al., 2020). The other term, σ_{st}(v_{ij}), is the number of those shortest paths that pass through the course v_{ij} between courses s and t. For example, in Figure 2, Course D has a high betweenness centrality because it lies on the shortest paths connecting B to G, F, and I. Both centrality and betweenness centrality focus on the importance of specific courses; however, there is another type of betweenness centrality (Davis et al., 2020) that extends the analysis from courses to requisite relationships in the network. The formula for betweenness edge centrality has nearly the same form; the only difference is that σ_{st}(e) now represents the number of shortest paths between vertices s and t that include the edge e:

g(e) = \sum_{s \neq t} \frac{\sigma_{st}(e)}{\sigma_{st}}

This is one of the few metrics that focuses on assigning values to the prerequisite relationships themselves, instead of using those connections to assign values to the courses. In our example curriculum in Figure 2, (D, G) has a high value for betweenness edge centrality.
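
networkx ships both vertex and edge betweenness centrality, so no custom implementation is needed; the raw (unnormalized) counts below use the same reconstructed subgraph.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"),
                ("D", "G"), ("G", "I"), ("E", "H")])  # E -> H is assumed

vertex_bc = nx.betweenness_centrality(G, normalized=False)
edge_bc = nx.edge_betweenness_centrality(G, normalized=False)

print(vertex_bc["D"])       # 3.0 -- D bridges B to F, to G, and to I
print(edge_bc[("D", "G")])  # 4.0 -- the D -> G edge carries the paths to G and I
```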

Curriculum-level metrics - summarizing complexity with single scores

Next, we will focus on curriculum-level metrics, which provide insight into the complexity of an entire program or specific subsets of a curriculum. First, we can readily describe the structural complexity - also referred to as program complexity - of the curriculum as the sum of all cruciality values in the network. This metric offers a summary score of the interdependencies within a curriculum (Heileman et al., 2018; Slim, 2016). The structural complexity, given by \alpha_c = \sum_{v_{ij} \in V} c(v_{ij}), can be used to compare different programs. In Figure 2, the overall structural complexity is 52.

Like the term-weighting that has been explored with cruciality, we can sum these weighted crucialities to form the term-weighted curricular complexity (α_wc; DeRocchis et al., 2021). Note that, unlike the structural complexity, which has a unique value for each curriculum, introducing term weights leads to a result that is dependent on how courses are arranged. For example, when adding term weights, our term-weighted structural complexity becomes 109. Moving J's prerequisite sequence up by one term increases the term-weighted structural complexity to 121 - an 11% increase.
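
With the full 12-course reconstruction (including the assumed E → H edge and K → L → J sequence), the reported totals can be reproduced; shifting the assumed J sequence one term later reproduces the 109-to-121 change described above.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"), ("D", "G"),
                ("G", "I"), ("E", "H"), ("K", "L"), ("L", "J")])  # assumptions noted earlier
G.add_node("A")
terms = {"A": 1, "B": 1, "K": 1, "C": 2, "D": 2, "E": 2, "L": 2,
         "F": 3, "G": 3, "H": 3, "J": 3, "I": 4}

def delay_factor(G, v):
    sources = [n for n in G if G.in_degree(n) == 0]
    sinks = [n for n in G if G.out_degree(n) == 0]
    lengths = [len(p) for s in sources for t in sinks
               for p in nx.all_simple_paths(G, s, t) if v in p]
    return max(lengths, default=1)

cruciality = {v: len(nx.descendants(G, v)) + delay_factor(G, v) for v in G}
print(sum(cruciality.values()))                  # 52  -- alpha_c
print(sum(terms[v] * cruciality[v] for v in G))  # 109 -- term-weighted

shifted = {**terms, "K": 2, "L": 3, "J": 4}       # J's sequence one term later
print(sum(shifted[v] * cruciality[v] for v in G))  # 121
```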

Structural complexity is not the only metric for summarizing the curriculum's complexity. For example, curriculum rigidity (C_r) measures the density of prerequisite and corequisite relationships within a curriculum, calculated as the ratio of the total number of prerequisite and corequisite relationships to the total number of courses:

C_r = \frac{|E|}{|V|}

Curriculum rigidity is comparatively easier to calculate than structural complexity, with a higher score (i.e., > 1) indicating a more constrained curriculum in which courses depend more heavily on one another. For example, if our example curriculum in Figure 2 has 12 courses (|V| = 12) and 9 prerequisite/corequisite relationships between them (|E| = 9), its rigidity would be 0.75, indicating a comparatively flexible curriculum.

When formulating what it means for a curriculum to be complex in terms of rigidity, the focus is on inflexibility, frequently measured by the density of prerequisites. In other words, the metrics have consistently conveyed the sentiment that a higher number is a negative indicator. On the other hand, the degrees of freedom (z_c) metric approaches curricular complexity from a different perspective: Flexibility. The degrees of freedom metric is a count of the number of valid ways students can arrange their course schedules (Reeping & Grote, 2022). Heileman et al. (2018) do not provide a formula for calculating the degrees of freedom, and it is perhaps not possible to express this metric in a closed form, considering some arrangements are not feasible (e.g., too many or too few credits). The degrees-of-freedom metric is closely related to the concept of weakly connected components in the curriculum graph - the maximal subgraphs in which any two vertices are connected by a path when edge direction is ignored. These weakly connected components form subgroupings of requirements, which can help isolate specific design patterns in the curriculum. A curriculum with more weakly connected components in more scattered clusters tends to have greater scheduling flexibility. In contrast, fewer, larger components indicate a more rigid structure with limited course arrangement options (Heileman et al., 2018). Calculating degrees of freedom using weakly connected components is possible, but it likely requires more computational resources to simulate all possible arrangements.

Alternatively, we can center the degrees of freedom on the number of terms a course can be moved in the curriculum and sum those values to obtain a score on the curriculum's flexibility. Thus, the degree of freedom for a curriculum can be defined as (Reeping & Grote, 2021):

z_c = \sum_{v_{ij} \in V} \left( n_t - d'(v_{ij}) - u(v_{ij}) \right)

In this case, nt is the total number of terms available for scheduling that do not exceed the students' expected time to degree te, d'(vij) is the delay factor ignoring corequisite relationships, and u(vij) represents the number of ineligible terms where a course is not offered. If a course is only offered in specific terms (e.g., Fall only), u(vij) captures how many semesters the course cannot be shifted within a student's schedule. For our example curriculum, our zc is 11, after accounting for the number of moves all courses can make. In the usual operationalization of curricular analytics, u(vij) is zero for all vertices because courses are assumed to be available at all times. By identifying such constraints, the degree of freedom helps assess structural bottlenecks that limit flexibility in curriculum planning, enabling curricula to be restructured with multiple valid scheduling arrangements.

To further develop the conceptualization of curricular complexity, efforts have been made to introduce metrics that are sensitive to specific student populations. The most pronounced addition is the integration of transfer-specific metrics to provide deeper insights into the obstacles these students encounter (Reeping & Grote, 2022). Although curricular complexity metrics have been effectively used to analyze graduation rates for First-Time-In-College (FTIC) students (Heileman et al., 2018; Nash et al., 2021), they do not necessarily capture the unique challenges faced by transfer students (Grote et al., 2021). Transfer students encounter additional complexities, including credit loss, course timing discrepancies, and structural misalignments between sending and receiving institutions. Researchers have sought to address these gaps by enhancing existing structural complexity metrics (Reeping et al., 2021). One transfer-specific metric is the transfer delay factor (T_d), which captures rigid prerequisite structures that affect a transfer student's time to graduation by extending it due to how courses are arranged. Unlike other measures that do not incorporate graduation timing, the transfer delay factor penalizes sequencing that extends a student's expected time to graduation. The transfer delay factor is defined as the sum of delay factors for all courses that a student must complete beyond their expected time to degree (t_e):

T_d = \sum_{v_{ij} \in V} d_c(v_{ij}) \cdot I(j > t_e)

For example, consider a transfer student who enters a program structured as shown in Figure 2. Suppose the curriculum was intended to be completed in three terms (te = 3), but the course sequencing makes it impossible to do so. Now, course I is extending the student's time to degree by one term. The delay factor of I is 4, since the longest prerequisite chain it belongs to is four courses long. It is the only course that extends the time to degree, so the sum of the delay factors is also 4. This metric allows institutions to identify sequences that systematically delay students, particularly transfers, and informs more flexible curricular planning.

Curriculum-level metrics - extracting specific curricular design patterns

Beyond examining specific courses and the entire curriculum, there are situations in which one might want to calculate any of the metrics reviewed thus far for a subset of courses. These subsets are typically specific curricular design patterns (Heileman et al., 2017), such as courses connected to the Calculus sequence, introductory engineering courses, or the mechanics sequence (i.e., Statics, Dynamics, and Strength of Materials; Padhye et al., 2024). For these design patterns, we can calculate the subcomplexity, also referred to as pattern complexity, which is the structural complexity of a subset of the curriculum network (i.e., a subgraph). However, more can be done with these subnetworks.

For example, the underlying process for calculating the transfer delay factor can be revised to quantify the proportion of complexity associated with a student's extended time to degree, referred to as complexity explained. To accomplish this, we would calculate the transfer delay subcomplexity (α_td; Reeping & Grote, 2022). This calculation involves forming a subgraph of courses that are connected directly or indirectly to the courses extending the students' time to degree, which is encapsulated in the I_s indicator function, and summing up the crucialities of those courses - which could be weighted or unweighted - to get the complexity of the network:

\alpha_{td} = \sum_{v_{ij} \in V} c(v_{ij}) \cdot I_s(v_{ij}, v_{*j} > t_e)

Here, c(v_{ij}) is the cruciality of the course, and I_s(v_{ij}, v_{*j} > t_e) is the indicator function identifying courses that are connected to courses that extend the student's time to degree, t_e. For example, consider a transfer student entering a program structured as shown in Figure 2, with an expected graduation time of 3 semesters (i.e., t_e = 3). By design, the student cannot complete the program because course I extends completion beyond the expected three-term mark. If we assign cruciality scores, we find that c(B) = 11, c(D) = 7, c(G) = 5, and c(I) = 4. So, α_td = 11 + 7 + 5 + 4 = 27.

Once we calculate the transfer delay subcomplexity, we can divide it by the overall structural complexity to calculate the complexity explained, α_e, which provides a quantitative way to describe the complexity attributable to that design pattern:

\alpha_e = \frac{\alpha_{td}}{\alpha_c}

In our example, the explained complexity is 27/52, or 52%. This means that 52% of the overall complexity is associated with completion delays. While initially used to assess course-level delays, it can also be applied more broadly to evaluate how specific curricular design patterns contribute to program-wide complexity (Reeping & Grote, 2022).
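
The following sketch ties the transfer metrics together on the reconstructed graph with the assumed term placements; a transfer student's t_e = 3 makes course I (term 4) the only late course.

```python
import networkx as nx

G = nx.DiGraph([("B", "C"), ("B", "D"), ("B", "E"), ("D", "F"), ("D", "G"),
                ("G", "I"), ("E", "H"), ("K", "L"), ("L", "J")])
G.add_node("A")
terms = {"A": 1, "B": 1, "K": 1, "C": 2, "D": 2, "E": 2, "L": 2,
         "F": 3, "G": 3, "H": 3, "J": 3, "I": 4}
t_e = 3  # expected time to degree for the transfer student

def delay_factor(G, v):
    sources = [n for n in G if G.in_degree(n) == 0]
    sinks = [n for n in G if G.out_degree(n) == 0]
    lengths = [len(p) for s in sources for t in sinks
               for p in nx.all_simple_paths(G, s, t) if v in p]
    return max(lengths, default=1)

late = [v for v in G if terms[v] > t_e]       # only course I
print(sum(delay_factor(G, v) for v in late))  # 4 -- transfer delay factor

cruciality = {v: len(nx.descendants(G, v)) + delay_factor(G, v) for v in G}
involved = set(late).union(*(nx.ancestors(G, v) for v in late))  # B, D, G, I
alpha_td = sum(cruciality[v] for v in involved)
alpha_c = sum(cruciality.values())
print(alpha_td, alpha_c, round(alpha_td / alpha_c, 2))  # 27 52 0.52
```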

Instructional complexity

Compared to the structural complexity component of curricular analytics, the instructional complexity has received much less attention. Across the reviewed papers, we identified a small subset of metrics that capture the curriculum's latent, non-structural features. In the original framework, instructional complexity was entirely proxied by individual-course pass rates to capture other aspects of the curriculum, including student support structures, course difficulty, and instructional quality (Heileman et al., 2018). Most papers did not expand on this idea and continued to use previous grades, rates of D and F grades in addition to withdrawals (DFW), or pass rates - all of which refer to the same underlying concept.

The use of pass rates to measure instructional complexity primarily serves to simulate student progression through a given program. For example, Molontay et al. (2020) used discrete-event simulation to model the expected graduation time based on both the curriculum's topological structure and course-level completion probabilities. The model uses the partial derivative of a function f(p_1, p_2, …, p_n) representing the expected graduation time with respect to one of the pass rates, p_i:

D_i = \frac{\partial f(p_1, p_2, \ldots, p_n)}{\partial p_i}

The value of Di must be estimated empirically or, more realistically, through simulation. This metric complements traditional structural metrics by incorporating student performance, allowing us to pinpoint bottlenecks to student progress by simulating their effect on student flow through the curriculum.

An extension of the idea of pass rates that does not require simulations is student mobility turbulence (SMT; Basavaraj, 2020). This metric captures volatility in student progression by analyzing patterns of program withdrawals and major changes, which can indicate structural barriers or curriculum inefficiencies. The SMT metric is formulated as follows:

SMT = \frac{\sum_{i=1}^{n} \left( C_1 \cdot DR_i + C_2 \cdot CM_i \right)}{N}

where drop rate (DR_i) denotes the number of students who withdrew from the program at the end of the ith term; changed major (CM_i) represents the number of students who changed their major at the end of the ith term (presumably into the program); C_1 and C_2 are weighting coefficients; and N is the total number of students who withdrew from the program or changed majors (although it seems more likely that N should refer to the total number of enrolled students if both DR and CM count students leaving). The term n is the number of terms under analysis (typically 8). The authors suggest that the weights C_1 and C_2 are determined based on empirical observations of how each factor contributes to student turbulence, with typical values set as C_1 = 1 and C_2 = 0.5 in prior applications. For example, if a cohort of 100 engineering students has 50 students leaving the engineering programs (DR) and 10 major changes among them (CM) over three terms, with C_1 = 1 and C_2 = 0.5, their SMT would be (1 × 50 + 0.5 × 10) / 100 = 0.55, suggesting moderate turbulence in student progression. Using the definition as proposed in Basavaraj (2020), we instead find that SMT would be (1 × 50 + 0.5 × 10) / 60 = 0.92 - a significantly higher turbulence value. While no strict thresholds are defined, lower SMT values typically indicate smoother, more stable progress toward degree completion. In contrast, higher values reflect increased academic instability, which can delay graduation. For example, if Program A has a higher SMT than Program B, it suggests that students in Program A experience more disruptions and are progressing less efficiently toward their degrees (Basavaraj, 2020).
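
The following sketch reproduces the two normalizations discussed above; the function and variable names are illustrative, and the per-term counts are hypothetical values summing to the example's totals.

```python
def smt(drops, changes, N, c1=1.0, c2=0.5):
    """Student mobility turbulence over per-term drop and major-change counts.

    N is the normalizing population; as noted above, the source's definition
    (students who left) and a cohort-size reading give different values.
    """
    return (c1 * sum(drops) + c2 * sum(changes)) / N

drops, changes = [20, 20, 10], [5, 3, 2]    # hypothetical three-term counts
print(smt(drops, changes, N=100))           # 0.55 -- normalized by cohort size
print(round(smt(drops, changes, N=60), 2))  # 0.92 -- normalized by leavers only
```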

There is a smattering of metrics that can further contextualize the instructional complexity of the curriculum beyond pass rates. For example, another metric defined in the literature, grade anomaly (GA; Waller, 2022), is a measure of students' performance in specific courses relative to their performance in their other courses. It is formulated as:

GA_{ij} = G_{ij} - GPA_{ij}

Here, GA_{ij} is the grade anomaly of student i in course j, which is the difference between the student's grade in the focal course, G_{ij}, and their overall GPA, GPA_{ij}. Note that grade anomaly is at the individual student level. At the course level, course grade anomaly (CGA) is the mean discrepancy between the students' GPAs and their grades for a given course:

CGA_j = \frac{1}{N} \sum_{i=1}^{N} \left( G_{ij} - GPA_{ij} \right)

where N is the number of students.

Expanding on the concept of CGA is the course toxicity metric (Thompson-Arjona, 2019). Course toxicity quantifies the impact of taking particular courses together on student success. It is calculated by comparing the probability of passing a course individually with that of passing it when taken alongside another course. The toxicity score T(v_{ij}, v_{kj}) of taking courses v_{ij} and v_{kj} together is defined as:

T(v_{ij}, v_{kj}) = P(X_{ij}) - P(X_{ij} \mid \text{co-enrollment in } v_{kj})

where X_{ij} is defined as the event that a student passes course v_{ij} on the first attempt. Higher toxicity scores flag course pairings associated with a reduced likelihood of passing when taken together, which can inform decisions about which courses should not be scheduled in the same term.

Moving beyond measures of student progression, the literature has also examined how closely students follow recommended course pathways. For example, the conformity score (CS; Backenköhler et al., 2018) quantifies the alignment between the courses students take and those recommended by academic advisors or degree plans. Formally, this metric is defined as follows:

CS = \frac{1}{|\tau|} \sum_{t \in \tau} \frac{1}{|s_t|} \sum_{i \in s_t} \left( 1 - \frac{|C^{rec}_{i,t} \setminus C^{sel}_{i,t}|}{k} \right)

where C^{sel}_{i,t} represents the set of courses student i selects in term t; C^{rec}_{i,t} represents the set of courses recommended for student i in term t; k is the number of recommended courses to consider (typically 4-6); τ is the set of all terms; and s_t is the set of students enrolled in term t. A higher CS suggests stronger adherence to the recommended pathway. Personal preferences, scheduling conflicts, or course availability can lead to lower scores when students do not follow the course recommendations. For example, consider a scenario in which three courses are recommended by advisors (k = 3): {Math101, Physics101, Chemistry101}. A student then enrolls in {Math101, Physics101, Biology101}, where two of the three courses are the same. The CS for this term would be 1 - (3 - 2)/3 ≈ 0.67, indicating moderate adherence to the recommendations. This metric can help evaluate the effectiveness of advising strategies and uncover patterns in which students navigate the curriculum differently than intended.
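
The per-term calculation is a simple set comparison; this sketch (with the hypothetical course names from the example above) reproduces the 0.67 figure.

```python
def conformity(selected, recommended, k):
    """Per-term conformity: 1 minus the share of recommendations not taken."""
    missed = len(set(recommended) - set(selected))
    return 1 - missed / k

selected = {"Math101", "Physics101", "Biology101"}
recommended = {"Math101", "Physics101", "Chemistry101"}
print(round(conformity(selected, recommended, k=3), 2))  # 0.67
```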

Blending structural and instructional features

In our sample, we found one metric that combined instructional and structural factors: curriculum stringency (Basavaraj, 2020). This metric uses the prerequisite relationships in the form of a sum of proportions relative to the number of vertices, evoking a similar idea to curricular rigidity, summed with a factor called DM_psf:

\text{Curriculum stringency} = \frac{\sum_{v_{ij} \in V} \deg^-(v_{ij})}{|V|} + DM_{psf}

The DMpsf factor in the equation is a "difficulty metric consider[ing] the average number of times a student had taken the exam (Avg_taken), the average number of times a student had failed (Avg_failed), and the average grade of the exam (Avg_grade)" (Basavaraj, 2020, p. 38). In this case, "exam" refers to program-specific exams outside the context of specific courses.

Curriculum stringency combines the structural features of the network with instructional features, such as student performance on non-course-related exams. For many programs, however, such features do not apply, so DM_psf = 0, and the metric becomes entirely structural.

DISCUSSION

In our review of metrics used to quantify curricular complexity, we found a wide range of ideas proposed by a growing number of authors in this field. Here, we will discuss two main observations we drew from the literature and close with ideas of how readers can implement these metrics in their own work, whether for research or for continuous improvement in their respective programs.

The literature has focused on structural metrics, with no direct measures of instructional complexity

As shown in Table 2, the number of structural metrics significantly exceeds the number of instructional metrics, with the majority focusing on course-level information. This is perhaps not entirely surprising at face value, given the data requirements for measuring instructional factors compared to those for structural factors. To calculate something related to the actual curriculum's structural features, all one needs is publicly available curricular information. However, instructional features can be much more challenging to quantify, as Heileman et al. (2018) readily admit, because instructional complexity primarily concerns latent constructs such as instructional quality and students' feelings of belongingness. Collecting instructional information about each individual course would be daunting, if not practically infeasible. Accordingly, in the original curricular analytics framework, instructional complexity is proxied by individual-course pass rates. However, this presents an operationalization issue, considering instructional complexity is meant to capture "the manner in which courses in the curriculum are taught and supported" (Heileman et al., 2018, p. 6), for which pass rates are a rather indirect measure. Indeed, none of the metrics we categorized as instructional complexity addresses the underlying construct as intended.

Most instructional complexity metrics are related to student performance in some way, whether through pass rates, DFW rates, or the grade anomaly metric. One metric measured performance by proxy, student mobility turbulence, which is a proportion that is calculated from a linear combination of dropout rates and major changes - a curriculum-level summary of student performance (Basavaraj, 2020). The only metric that differed from an exclusive focus on student performance was the conformity score. The conformity score, instead, spotlights students' choices in how they navigate the curriculum (Backenköhler et al., 2018). This dearth of true-to-form instructional metrics invites future work to conceptualize different ways to characterize the instructional complexity of curricula.

Metrics proposed in the literature are often built on similar computations but have distinct foci

Central to the concept of curricular analytics is quantifying the curriculum's complexity. Accordingly, the metrics proposed within the literature form a constellation of different aspects of complexity that we can readily assign values to in some meaningful way. Within this diverse set of metrics, we found that each could be readily mapped to distinct categories that capture its level and purpose. Within these categories, a question we can ask is the extent to which the metrics correlate with one another. In other words, are the metrics measuring the same underlying construct, or perhaps a significant portion of it? The initial evidence suggests the answer is most likely yes. Focusing on structural complexity, Heileman et al. (2018) initially defined an aggregate measure of a program's complexity. They found that the delay and blocking factors were sufficient to characterize the program complexity, even when considering the centrality, degrees of freedom, and reachability metrics.

At their core, most structural complexity metrics (i.e., bottlenecks, blocking, delay, different versions of centrality, cruciality, reachability, rigidity, inflexibility, transfer delay, and structural complexity itself) are calculated using edge-based information in the network. As the number of requirements increases in terms of prerequisites and corequisites, these values will all tend to increase proportionately. Other metrics, such as the degrees-of-freedom and the deferment factor, do not directly derive their values from counting the number of prerequisites but are nonetheless driven by the same relationship, except that degrees-of-freedom is one of the few metrics where larger values are desirable.

To illustrate, consider the deferment factor. On its surface, it seems distinct from edge-focused metrics because it is one of the few that incorporate timing. However, if we restrict the delay factor to only consider prerequisite relationships, which we'll denote by d_c'(v_{ij}), we can find a closed-form expression for D_c(v_{ij}) in terms of d_c'(v_{ij}). Because the deferment factor is determined by k, the number of times a student can fail a course before extending their time to degree, we need to calculate the number of terms the student has available past the term in which the course is offered (i.e., t_e - j). Then, we need to know how embedded the course is in the prerequisite structure - or more specifically, how long the prerequisite chain is that the course belongs to - the delay factor, ignoring corequisites, d_c'(v_{ij}). Then, k = t_e - j - d_c'(v_{ij}) + 1. However, we need to account for the possibility that k can be negative when the course is later in the curriculum and the delay factor is high, so we should default k to 0 in such cases. That would mean k = max{0, t_e - j - d_c'(v_{ij}) + 1}. Substituting this value into the deferment factor equation yields the deferment factor in terms of our modified delay factor:

D_c(v_{ij}) = \frac{1}{\max\{0,\; t_e - j - d_c'(v_{ij}) + 1\} + 1}

This quick derivation is not meant to illustrate that the metrics are inherently redundant—quite the contrary. Although the deferment factor can be calculated using a modified version of the delay factor, it offers a different perspective on a course's role in the program. Similarly, the reachability factor and blocking factor conceptually build on the idea of access to a course; more specifically, they ask, respectively, how many courses must be completed before a student can enroll in a given course and how many later courses a failure would block. Early courses, such as Calculus, tend to have high blocking factors but functionally zero reachability scores when the data are limited to the courses required for the student to earn their degree. On the other hand, there might be a course later in the program that blocks nothing because it is the final course in a sequence that depends on several previous courses. The key here is to ensure that the researcher's goals align with the metrics being used, which we will discuss next.

Using the metrics in practice

We contend that researchers can combine the metrics reviewed in this paper in various ways for a range of practical and research purposes. Although we have discussed structural and instructional complexity separately for the majority of this manuscript, they are inextricably linked. After all, structural complexity is an issue only if instructional complexity is as well: a course can be identified as highly critical by various metrics (e.g., bottleneck, centrality, cruciality), but this classification is moot if its pass rate is sufficiently high. Likewise, a curriculum might contain an isolated course, or one belonging to a comparatively small curricular design pattern, that would not be singled out by any of the metrics reviewed in this paper yet still has a dismal pass rate and serves as a barrier to degree completion. It is therefore essential to avoid placing excessive emphasis on any single metric, especially at the curriculum level.

In fact, the data-volume challenge associated with instructional complexity could be surmounted by using structural complexity information and pass rates to screen which courses warrant collecting instructional complexity data for curriculum development and student support improvement. If a course is highly crucial according to the metrics examined and also exhibits low pass rates, digging deeper into what drives those low pass rates could be a strategic way to allocate resources. Additional information could be incorporated at the course level to denote modality (e.g., in-person, virtual asynchronous, HyFlex), the pedagogical strategies used by the instructor, peer tutoring support, and assessment structure. However, the most useful context is likely to come from qualitative methods, such as observations and interviews with current and former students and instructors.
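
A minimal sketch of this screening logic, continuing the toy example from earlier, appears below; the pass rates are invented purely for illustration.

```r
# Screen for courses that are both structurally crucial and hard to pass;
# these are candidates for deeper instructional data collection.
screening <- data.frame(
  course     = courses,
  cruciality = cruciality,
  pass_rate  = c(0.72, 0.81, 0.64, 0.90, 0.97)  # fabricated for illustration
)
flagged <- subset(screening, cruciality >= median(cruciality) & pass_rate < 0.75)
flagged  # candidates for syllabus review, observations, and interviews
```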

To provide actionable guidance to the engineering education community, Table 3 presents a non-exhaustive list of ideas that might prompt a researcher or practitioner to employ the curricular analytics framework. We include a mix of issues that may be relevant to a researcher seeking to use the metrics as correlates with other variables of concern, as well as those more suitable for a practitioner aiming to solve day-to-day curriculum-related problems. We have created an R package to facilitate these analyses (Reeping, 2026) and a data package to apply these ideas to real curricula (Reeping et al., 2026).

Table 3: Possible analytical goals and appropriate metrics

| Analytical goal | Structural metrics | Instructional metrics |
| --- | --- | --- |
| (1) Compare or benchmark programs | Structural complexity, curricular rigidity | Student mobility turbulence |
| (2) Locate gatekeeper or bottleneck courses | Centrality, cruciality, bottleneck | Pass rates, course grade anomaly, pass-through effect |
| (3) Evaluate the scheduling of courses | Inflexibility factor, deferment factor, term-weighted cruciality | |
| (4) Determine courses impacting transfer student progress | Inflexibility factor, complexity explained, transfer delay factor | Pass-through effect |
| (5) Assess the extent to which students follow the curriculum as prescribed | | Conformity score |
| (6) Balance semesters with manageable course loads | Centrality, cruciality, bottleneck | Course toxicity, course grade anomaly |
| (7) Simulate how students flow through a curriculum to estimate completion rates | Structural complexity | Pass rates |
| (8) Show continuous improvement in program design and outcomes | Structural complexity, inflexibility factor, transfer delay factor | Student mobility turbulence |

Future directions

Based on our synthesis, it is clear that the current literature relies heavily on pass rates, DFW rates, and grade-based proxies for instructional complexity. Although these metrics scale easily, they are indirect measures of the construct originally described in curricular analytics (i.e., how courses are taught and supported), a nontrivial mismatch with the construct's original definition. Future work should develop instructional complexity measures that draw more directly on instructional environments and support structures.

Promising data sources include classroom observation protocols; syllabi and assessment structures (e.g., grading composition, workload, high-stakes testing); course evaluations or student surveys of teaching effectiveness and support; and administrative indicators such as tutoring or supplemental instruction use, office-hour attendance, course capacity constraints (waitlists and seat availability), and course offering frequency. Methodologically, researchers could use natural language processing (NLP) on syllabi and catalog descriptions to quantify instructional alignment and cognitive demand, and they could apply multilevel or latent-variable models to combine multiple indicators into more construct-faithful measures of instructional complexity (De Silva et al., 2024). Another approach could involve embedding learning outcomes into the network, or analyzing them separately, to determine whether courses appropriately build from low-level to high-level outcomes (Heileman & Zhang, 2024).
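
As one deliberately simplified illustration of the NLP direction, the sketch below tags learning-outcome statements with a Bloom's-taxonomy level using keyword matching. The verb lists and example outcomes are invented; real analyses would use proper NLP tooling (tokenization, lemmatization, trained classifiers) rather than hand-picked keywords.

```r
# Map a handful of verbs to illustrative Bloom's levels (1 = remember,
# 3 = apply, 5 = evaluate/create). These lists are illustrative only.
bloom <- list(
  `1` = c("define", "list", "recall"),
  `3` = c("apply", "solve", "compute"),
  `5` = c("design", "evaluate", "synthesize")
)

# Return the highest Bloom's level whose verbs appear in an outcome statement.
outcome_level <- function(text) {
  words <- tolower(unlist(strsplit(text, "[^A-Za-z]+")))
  hits  <- sapply(bloom, function(verbs) any(words %in% verbs))
  if (!any(hits)) return(NA_integer_)
  max(as.integer(names(bloom)[hits]))
}

outcome_level("Define stress and strain and solve simple axial-load problems")  # 3
outcome_level("Design a structural member to satisfy given loading criteria")   # 5
```

Scoring outcomes this way for each course in a prerequisite chain would let a researcher check whether levels rise as students move downstream, in the spirit of the backwards-design analysis cited above.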

CONCLUSION

This scoping literature review synthesized the metrics used in curricular analytics to analyze engineering curricula quantitatively, identifying a comprehensive catalog of 23 unique metrics across different levels of analysis. Of these, 13 operate at the course level, two at the student level, and eight at the curriculum level. The majority focus on structural complexity, while eight address instructional complexity—highlighting an imbalance in how curricula are currently evaluated.

The field of curricular analytics shows promising growth but requires greater methodological diversity. While graph-theoretic and network-analytic approaches have established a strong foundation, most studies focus on prerequisite structures without adequately considering instructional quality, student experiences, or demographic factors. Future research should prioritize expanding instructional complexity measures beyond pass rates to include pedagogical quality and student engagement. Moreover, researchers can consider integrating metrics across the course, curriculum, and student levels to create more comprehensive evaluation models that do not rely too heavily on individual metrics.

By addressing these research gaps and leveraging the full spectrum of metrics identified in this review, engineering education can advance toward more data-informed curriculum design that enhances student success while promoting educational equity. The catalog of metrics presented here provides researchers, practitioners, and administrators with actionable tools to quantitatively evaluate and optimize curricula, thereby better serving all students.

DECLARATION

Acknowledgement

None.

Author Contributions

Nahal Rashedi and David Reeping developed the concept for the manuscript. Nahal Rashedi reviewed the literature, formulated research questions, collected and analyzed the data, and interpreted the results. David Reeping and Siqing Wei assisted with manuscript writing and data interpretation. Each author read and approved the final manuscript.

Source of funding

This material is based upon work supported by the National Science Foundation under Grant Number EEC-2152441. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Ethics approval

This work did not fall under the purview of an Institutional Review Board as it did not involve human subjects.

Informed consent

Not applicable.

Conflicts of interest

The authors declare no competing interests.

Use of large language models, AI and machine learning tools

None.

Data Availability Statement

Data are available upon request.

REFERENCES

  1. Backenköhler, M., Scherzinger, F., Singla, A., & Wolf, V. (2018). Data-driven approach towards a personalized curriculum (EasyChair Preprints). EasyChair. https://doi.org/10.29007/29gh
  2. Basavaraj, P. (2020). Utilizing institutional data for curriculum enhancement to improve student success in undergraduate computing programs [Thesis, University of Central Florida]. https://purls.library.ucf.edu/go/DP0023055
  3. Davis, G. M., AbuHashem, A. A., Lang, D., & Stevens, M. L. (2020). Identifying preparatory courses that predict student success in quantitative subjects. Proceedings of the Seventh ACM Conference on Learning @ Scale, 337-340. https://doi.org/10.1145/3386527.3406742
  4. DeRocchis, A. M., Boucheron, L. E., Garcia, M., & Stochaj, S. J. (2021). Curricular complexity of student schedules compared to a canonical degree roadmap. 2021 IEEE Frontiers in Education Conference (FIE), 1-5. https://doi.org/10.1109/FIE49875.2021.9637443
  5. De Silva, L. M. H., Rodríguez-Triana, M. J., Chounta, I. A., & Pishtari, G. (2024). Curriculum analytics in higher education institutions: A systematic literature review. Journal of Computing in Higher Education, 37, 1-47. https://doi.org/10.1007/s12528-024-09410-8
  6. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91-108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
  7. Grote, D. M., Knight, D. B., Lee, W. C., & Watford, B. A. (2021). Navigating the curricular maze: Examining the complexities of articulated pathways for transfer students in engineering. Community College Journal of Research and Practice, 45(11), 779-801. https://doi.org/10.1080/10668926.2020.1798303
  8. Heileman, G., Hickman, M., Slim, A., & Abdallah, C. (2017). Characterizing the complexity of curricular patterns in engineering programs. 2017 ASEE Annual Conference & Exposition, 28029. https://doi.org/10.18260/1-2--28029
  9. Heileman, G. L., Abdallah, C. T., Slim, A., & Hickman, M. (2018). Curricular analytics: A framework for quantifying the impact of curricular reforms and pedagogical innovations. arXiv. Retrieved Mar. 3, 2026, from http://arxiv.org/abs/1811.09676
  10. Heileman, G. L., & Zhang, Y. (2024). Minimizing curricular complexity through backwards design. 2024 ASEE Annual Conference & Exposition, 47779. https://doi.org/10.18260/1-2--47779
  11. Hilliger, I., Aguirre, C., Miranda, C., Celis, S., & Pérez-Sanagustín, M. (2020). Design of a curriculum analytics tool to support continuous improvement processes in higher education. Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, 181-186. https://doi.org/10.1145/3375462.3375489
  12. Hilliger, I., Aguirre, C., Miranda, C., Celis, S., & Pérez-Sanagustín, M. (2022). Lessons learned from designing a curriculum analytics tool for improving student learning and program quality. Journal of Computing in Higher Education, 34(3), 633-657. https://doi.org/10.1007/s12528-022-09315-4
  13. Li, X. V., Rosson, M. B., & Hellar, B. (2023). A synthetic literature review on analytics to support curriculum improvement in higher education. EDULEARN23 Proceedings, 2130-2143. https://doi.org/10.21125/edulearn.2023.0640
  14. Loge, E. (2022). A quantitative assessment and comparison of the undergraduate curriculum prerequisite structures for the universities in the Minnesota State System with particular emphasis on mathematics courses. Minnesota State University, Mankato. Retrieved Mar. 3, 2026, from https://cornerstone.lib.mnsu.edu/etds/1208/
  15. Molontay, R., Horvath, N., Bergmann, J., Szekrenyes, D., & Szabo, M. (2020). Characterizing curriculum prerequisite networks by a student flow approach. IEEE Transactions on Learning Technologies, 13(3), 491-501. https://doi.org/10.1109/TLT.2020.2981331
  16. Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18, 143. https://doi.org/10.1186/s12874-018-0611-x
  17. Nash, J., Boucheron, L. E., & Stochaj, S. J. (2021). A correlative analysis of course grades as related to curricular prerequisite structure and inter-class topic dependencies. 2021 IEEE Frontiers in Education Conference (FIE), 1-5. https://doi.org/10.1109/FIE49875.2021.9637401
  18. Padhye, S., Reeping, D., & Rashedi, N. (2024). Analyzing trends in curricular complexity and extracting common curricular design patterns. 2024 ASEE Annual Conference & Exposition, 46580. https://doi.org/10.18260/1-2--46580
  19. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P., & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
  20. Reeping, D., & Grote, D. (2021). Rethinking the curricular complexity framework for transfer students. 2021 ASEE Virtual Annual Conference Content Access Proceedings, 37680. https://doi.org/10.18260/1-2--37680
  21. Reeping, D., Grote, D. M., & Knight, D. B. (2021). Effects of large-scale programmatic change on electrical and computer engineering transfer student pathways. IEEE Transactions on Education, 64(2), 117-123. https://doi.org/10.1109/TE.2020.3015090
  22. Reeping, D., & Grote, D. (2022). Characterizing the curricular complexity faced by transfer students: 2+2, vertical transfers, and curricular change. 2022 ASEE Annual Conference & Exposition Proceedings, 41462. https://doi.org/10.18260/1-2--41462
  23. Reeping, D. (2026). CurricularComplexity: Toolkit for analyzing curricular complexity (Version 1.0.1). Retrieved Mar. 3, 2026, from https://cran.r-project.org/web/packages/CurricularComplexity/index.html
  24. Reeping, D., Rashedi, N., Setser, E., Padhye, S., Banerjee, A., Hodge, E., & Smith, L. (2026). CurricularComplexityData: Data for Exploring Curricular Complexity (Version 0.1.0). https://cran.r-project.org/web/packages/CurricularComplexityData/index.html
  25. Slim, A. (2016). Curricular analytics in higher education [Thesis, The University of New Mexico].
  26. Slim, A., Kozlick, J., Heileman, G. L., & Abdallah, C. T. (2014). The complexity of university curricula according to course cruciality. 2014 Eighth International Conference on Complex, Intelligent and Software Intensive Systems, 242-248. https://doi.org/10.1109/CISIS.2014.34
  27. Starkey, K. (2021). STEM Passport Program Literature Review. Western Interstate Commission for Higher Education. Retrieved Mar. 3, 2026, from https://www.wiche.edu/wp-content/uploads/2022/07/STEM-Passport-Program-Literature-Review-v1.pdf
  28. Thompson-Arjona, W. (2019). Curricular optimization: Solving for the optimal student success pathway [Thesis, University of Kentucky]. https://doi.org/10.13023/etd.2019.147
  29. Waller, D. (2022). Organizational factors and engineering student persistence [Thesis, Purdue University]. https://doi.org/10.25394/PGS.21606342.v1
  30. Wigdahl, J., Heileman, G. L., Slim, A., & Abdallah, C. T. (2014). Curricular efficiency: What role does it play in student success? 2014 ASEE Annual Conference & Exposition. Retrieved Mar. 3, 2026, from https://peer.asee.org/curricular-efficiency-what-role-does-it-play-in-student-success
  31. Zulkifli, F. (2019). Systematic research on predictive models on students' academic performance in higher education. International Journal of Recent Technology and Engineering, 8(2S3), 357-363. https://doi.org/10.35940/ijrte.B1061.0782S319