Introduction

As the world transitions to a low-carbon energy system, deployment of non-dispatchable renewable power, such as wind and solar photovoltaic (PV), continues to expand1. In Fig. 1, we illustrate a simplified low-carbon energy system in which baseline generation is provided by intermittent renewables, and load-balancing is handled by flexible low-carbon fossil fuel-fired power generation2. Such energy systems are crucial for the supply of continuous energy to enable uninterrupted industrial and residential use3,4,5,6. A leading technology for dispatchable low-carbon power is combustion of fossil fuels with subsequent post-combustion carbon capture (PCC) and storage. In PCC and storage, CO2 is removed from the resulting flue gas to yield a clean flue gas, which can be vented to the atmosphere, and a high-purity CO2 product stream7. The latter can be compressed and sent to downstream processes for utilisation as a chemical feedstock8,9, or to be permanently sequestered in subsurface geological formations10,11. The highly dynamic nature of power plants operating in a load-balancing mode introduces variability in the flue gas stream in terms of composition, flow rate, and temperature12,13,14. Therefore, PCC processes need to be designed with a high degree of flexibility in mind to be robust under such conditions. This would allow fossil fuel power generation with PCC to become a reliable source of low-carbon dispatchable power in the emerging energy system15.

Fig. 1: Low-carbon energy system.

A schematic representation of a flexible low-carbon energy system with intermittent renewables (non-dispatchable power) and low-carbon dispatchable power. Power generation by fossil fuels is expected to provide a variable power output to match supply and demand of energy at any given time. Downstream post-combustion capture (PCC) is subject to operation under variable flue gas feed flow rate and CO2 composition.

The most mature technology for PCC is reversible chemical absorption into aqueous amine solutions16. These processes are able to produce a high-purity CO2 product (≥95%) at high CO2 recoveries (90–99%). However, there are several practical challenges associated with the operation of these processes which lead to economic and environmental concerns, such as the large thermal energy requirement for regenerating the rich amine solution17, thermal degradation of the solvent at the process operating conditions18, and solvent losses owing to volatility19. As a result of these challenges, research efforts have been dedicated to exploring alternative technologies for carrying out the separation.

Adsorption-based processes represent an attractive alternative to amine-based absorption for PCC, primarily owing to the comparatively low energy penalty associated with regeneration of the sorbent20. In an adsorption-based process, a column is packed with pellets of a solid adsorbent material which is highly selective for concentrating CO2 at its surface. An adsorption column is operated by cyclically varying the operating conditions to capture high-purity CO2 on the solid surface, and subsequently release it in a controlled manner21. The most widely studied adsorption cycle for PCC is pressure-vacuum swing adsorption (PVSA)22,23,24,25, where adsorption is carried out at high pressure, and the CO2 product is extracted from the bed under vacuum. The design of PVSA carbon capture processes is typically conducted to minimise the cost per tonne of CO2 captured, while satisfying constraints on the effectiveness of the separation26,27,28,29. To be suitable for geological storage, the CO2 product stream must have a purity of at least 95%30,31. It is a generally accepted standard, originally proposed by the United States Department of Energy (DoE), that processes should attain a CO2 recovery of at least 90%32. This design approach, based on minimising the capture cost, does not consider any aspects related to the flexibility of the process.

There is a clear need to move beyond the academic status quo of designing for static, idealised scenarios and to incorporate flexibility into the design to ensure the long-term viability of these processes throughout the energy transition. First, there is currently very little literature available on the flexible operation of PVSA processes applied to PCC33. Second, we do not currently have a good understanding of how these processes respond to transient operating scenarios. Third, by analogy with other chemical processes, we expect there to be a trade-off between flexibility and process economics34,35,36. Rationalising this trade-off is key to enabling the industrial adoption of adsorption-based carbon capture processes.

In this work, we take an initial step towards a comprehensive analysis of process flexibility by quantifying it for a nominal flue gas feed scenario. Understanding the flexibility of the process at a nominal set of feed conditions is the underpinning element for ultimately integrating process flexibility into the design process. The approach utilises a high-fidelity mathematical process model for the separation of CO2/N2 by PVSA, together with an associated techno-economic assessment, to quantify the performance of PCC from a typical coal-fired power plant37. We have coupled the mathematical model with a framework for identifying the design space within which the process can be operated while satisfying commonly adopted performance targets on the CO2 product stream38. For any chosen operating strategy, the flexibility of the process can then be quantified with respect to the design space boundary. In the following, we calculate and compare the flexibility resulting from several proposed PVSA design approaches. We find that the proposed direct design space approach is both effective and efficient, providing a rich set of information around the process flexibility that classical process design does not deliver. Importantly, this approach lays the groundwork for future studies investigating wider operating ranges and feed conditions. This work aims to demonstrate the integration of quantitative process flexibility into the design of carbon capture processes, which is essential in tackling climate change challenges.

Results and discussion

Case study: coal-fired power plant with CO2 capture

In this work, we have analysed a four-step PVSA process, with feed pressurisation, applied to PCC from a typical coal-fired power plant. The flue gas eluted from the power plant is considered to be a dry binary mixture of 15% CO2/85% N2 at a pressure of 1 bar and a temperature of 298 K. We consider an adsorption process utilising a packed bed of zeolite 13X adsorbent, a commercially available and widely studied material for PCC. The considered power plant has a typical specification, with a gross electrical power output of 1000 MW, producing approximately 9 Mt yr−1 of CO2 emissions to the atmosphere. Full details of the process modelling and economic assessment conducted for this case study can be found in the Methods section. Note that in this work we consider a fixed base-case design and aim to study how this design responds to variations in the operational variables. As such, our observations retain general applicability, irrespective of the chosen nominal values of the design parameters.

Process behaviour in the knowledge space

The problem is formulated as follows. In the first step, the design decisions (DDs) for the system are identified. These are the design parameters and/or operational variables which will be varied to understand the flexibility of the process. Here, we consider a typical PVSA design problem, where we use the feed velocity (vF), the high pressure (pH), and the intermediate pressure (pI) as DDs. We note that we have not considered the cycle step durations (tads, adsorption time; tbd, blowdown time; tevac, evacuation time) of the adsorption process in this work. For deployment of adsorption-based PCC at scale, the volume of flue gas that needs to be processed is much larger than can be handled using a single adsorption column. Therefore, in practice, hundreds (or thousands) of non-interacting adsorption columns are scheduled in parallel to yield an overall system that processes flue gas on a continuous basis and at the required scale26,37. Since the cycle step durations must be specified ahead of time in the scale-up of the system, one does not have the freedom to vary these dynamically, as doing so would compromise the ability of the overall system to continuously accept the full volume of flue gas being produced by the power plant (see the scheduling sketch below). This system-level constraint is in contrast to the operation of a single non-interacting adsorption column, where one is free to vary the cycle step durations to control the performance of the process operation39. Additionally, we note that previous work has shown that the low pressure (pL) strongly impacts the performance of the PVSA process40. However, no in-depth assessment has been made in the literature of practically achievable bounds on the low pressure at a large scale. Therefore, here we have opted to fix the low pressure at a commonly reported value25,30,37,40.
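To make the scheduling constraint concrete, the following minimal sketch estimates the number of staggered columns needed per train so that one column is always in the adsorption step. The adsorption, blowdown, and evacuation durations are those fixed in this work (see Methods); the pressurisation time shown is purely illustrative, as it is not an input reported here.

```python
import math

def columns_per_train(t_ads, t_bd, t_evac, t_pres):
    """Minimum number of staggered columns per train such that one
    column is always adsorbing, i.e., the train accepts flue gas
    continuously: a new column must start adsorbing every t_ads
    seconds, so ceil(t_cycle / t_ads) columns are needed."""
    t_cycle = t_ads + t_bd + t_evac + t_pres
    return math.ceil(t_cycle / t_ads)

# t_ads, t_bd, t_evac fixed in this work; t_pres is illustrative only.
print(columns_per_train(t_ads=50.0, t_bd=100.0, t_evac=100.0, t_pres=30.0))
# -> 6 columns per train for a 280 s cycle
```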

Upon varying the DDs of the system, we monitor the purity, recovery, energy consumption, productivity, and capture cost as key performance indicators (KPIs). In the second step, the bounds on the DDs are identified and used to generate the knowledge space (KSp). The KSp defines the sub-space of the entire design decision space within which we perform quasi-random sampling to probe the behaviour of the process. We have used the Sobol sequence to sample the design decision space and generate the KSp by taking 4096 quasi-random samples. Of these, 3458 satisfied the feasibility constraint on the operating pressures (pH > pI). For each sampled point, we run the process model and economic assessment and record the DDs and the resulting KPIs.

Approximated cost-optimal process design

Before performing design space identification and assessing the flexibility of the process, the KSp data set is used to approximate Pareto-optimal frontiers of the process performance, and to approximate the cost-optimal operating point given the process performance constraints. As a benchmark of optimality, we have deployed the widely used non-dominated sorting genetic algorithm II (NSGA-II) to conduct rigorous process optimisation. Using the NSGA-II routine, we have calculated Pareto fronts of unconstrained purity/recovery and constrained productivity/energy usage, as well as the constrained cost-optimal point. For the constrained problems, we require that the process produces CO2 with purity ≥95% and recovery ≥90%, in compliance with commonly adopted performance targets used in previous studies on PCC32,41,42,43,44, including benchmark studies by the US Department of Energy45. In Fig. 2, we provide a comparison between the optimal process behaviour identified using NSGA-II (solid lines) and that obtained by sampling the KSp using the quasi-random Sobol sequence (symbols). The labelled points in each panel show the position in each Pareto plane of the cost-optimal point obtained by each method (blue: Sobol sampling, red: NSGA-II). We can see that there is excellent agreement between the Pareto fronts generated by each approach, validating the use of quasi-random sampling for the purposes of initial identification of the optimal process performance. In Table 1, we present the values of the DDs and the minimum capture cost associated with the cost-optimal point identified using each approach. Because all the relevant KPIs are computed for each point of the Sobol sequence, the optimal solution is readily found by identifying the sample that has the lowest cost while meeting the purity/recovery performance targets (see the sketch below). Again, we can see that the solutions are very similar, with the Sobol sampling approach obtaining a solution whose minimum capture cost deviates from the optimum by only 1.1%. This is particularly impressive when considering the computational cost of each approach. In total, 15,120 forward simulations were performed to solve all three optimisation problems and obtain the Pareto fronts and cost-optimal design (Fig. 2a, b) using NSGA-II. In comparison, only 3458 forward simulations were required for Sobol sampling. While Sobol sampling does not assume that an optimal solution is known beforehand, it does rely on having enough samples within the KSp. Under these circumstances, the Sobol sampling approach becomes an efficient and effective means to approximate the optimal performance of the adsorption process. Further to this, the outputs of Sobol sampling can be used to generate a rich set of data around the flexibility of the process operation, as will be demonstrated below.
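As a concrete illustration of this selection step, the minimal sketch below filters a KSp-style data set by the purity/recovery targets and picks the lowest-cost satisfying sample. The arrays here are random placeholders standing in for the 3458 recorded samples, not outputs of the actual model.

```python
import numpy as np

# Placeholder KSp arrays: one entry per feasible Sobol sample.
rng = np.random.default_rng(0)
purity = rng.uniform(80, 100, 3458)    # %
recovery = rng.uniform(70, 100, 3458)  # %
cost = rng.uniform(55, 120, 3458)      # $ per tonne CO2 captured

# Impose the performance targets, then take the cheapest satisfying sample.
satisfied = (purity >= 95.0) & (recovery >= 90.0)
if satisfied.any():
    best = np.flatnonzero(satisfied)[np.argmin(cost[satisfied])]
    print(f"Approximate cost optimum: sample {best}, {cost[best]:.2f} $/tonne")
```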

Fig. 2: Pareto front comparison between formal optimisation (NSGA-II) and Sobol sampling.

a Unconstrained purity-recovery Pareto front. b Constrained energy-productivity Pareto front. In both a and b, the solid lines correspond to the NSGA-II Pareto fronts, while the scattered points correspond to Sobol sampling. The cost-optimal design from optimisation using NSGA-II is highlighted as a red square in each Pareto plane, and that from Sobol sampling as a blue circle. The corresponding design decisions and KPI values are summarised in Table 1.

Table 1 Design decisions and minimum capture cost for the cost-optimal solution from NSGA-II and Sobol sampling

Quantification of acceptable operating ranges

Following the design space identification framework established in ref. 38, an artificial neural network (ANN) is trained and used as an interpolator to increase the resolution of the KSp data set (see Methods section). Based on this high-resolution data set, we identify the design space via alpha shapes, as illustrated in Fig. 3. The shaded grey region in Fig. 3 is the design space; operation within it is guaranteed to satisfy the purity/recovery constraints. The mathematical definition of the design space enables quantitative analysis of flexibility with respect to any nominal operating point (NOP). Starting from an NOP of interest, a cuboid is formed and expanded outwards until one of its vertices hits the design space boundary. This forms an acceptable operating region (AOR) centred around the NOP. The side lengths of the AOR correspond to the multivariate proven acceptable range (MPAR) for each design decision, for which operation is guaranteed to be within the design space. Here, we investigate the cost-optimal design obtained from the Sobol sampling as the NOP of interest and identify its AOR, as shown in Fig. 3. We can see that the cost-optimal design has very low flexibility, as the AOR is not visible with reasonable axis scaling in Fig. 3. As can be anticipated from the position of the cost-optimal design in the purity/recovery plane (Fig. 2a), there are active constraints at the cost-optimal point which cause it to lie very close to the edge of the design space. This means that, in practice, the cost-optimal point would present low operability in the presence of disturbances to the nominal operation. The acceptable operating range is very limited, and the smallest disturbance in feed conditions (±0.11% with respect to the nominal values of the DDs) would result in a violation of operational constraints.

Fig. 3: Identification of the design space (DSp) and quantification of the acceptable operating region (AOR).

The design space (DSp) representing the region of the KSp for which combinations of the design decisions satisfy CO2 purity ≥95% and recovery ≥90% is shown as the shaded grey region. Points in the quasi-random sample which satisfy the constraints are shown in orange (Sat). Points which do not satisfy the constraints have been excluded for clarity. The nominal operating point (NOP) and corresponding acceptable operating region (AOR) for the cost-optimal design (blue) and the maximum flexibility design (green) are provided. The circle corresponds to the NOP and the box corresponds to the AOR.

Through the application of the design space identification framework, we are able to quantitatively ascertain that the status-quo design approach for PVSA processes yields an inflexible design with low operability in practice. At this stage, we seek to understand whether there is scope in the design workflow to accommodate flexibility. More specifically, we aim to understand how much flexibility can be allowed for in the design, and what impacts the accommodation of flexibility has on the other design outcomes, such as process efficiency and economics. To analyse this in detail, we will study two further design cases using the design space identification framework and the existing KSp data set. First, we will consider the “relaxed cost-optimal design”: a case which maintains the original cost-optimal Sobol sampling point, but allows some relaxation of the recovery constraint to handle disturbances. Second, we will consider the “maximum flexibility design”: a case that maximises flexibility within the original design space. In the following, we analyse each of these cases in further detail, and subsequently compare their performance.

Flexibility by constraint relaxation

In the design of PCC processes, the CO2 product stream, which is extracted from the flue gas, must be of sufficient purity (≥95%) to be suitable for geological sequestration. This design constraint is non-negotiable and cannot be violated, even for the purpose of improving process flexibility. However, the recovery constraint is only a target, and can potentially be violated if there are other operational benefits associated with doing so. Here, we study the case where we relax the recovery constraint to investigate the trade-off between flexibility and performance. The low-discrepancy Sobol sampling coupled with the ANN surrogate model allows for the rapid assessment and identification of design spaces with different performance constraints; no additional high-fidelity model simulations are needed to characterise a new design space under different performance constraints (a minimal sketch of this re-thresholding is given below). This showcases the usefulness of the design space identification framework for efficiently exploring different design options to attain process flexibility.
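To illustrate how the ANN-interpolated KSp allows the design space to be redrawn without new simulations, the sketch below re-applies different recovery thresholds to a fixed set of surrogate predictions. The arrays are random placeholders, not outputs of the actual surrogate.

```python
import numpy as np

# Placeholder ANN predictions on a high-resolution interpolation grid.
rng = np.random.default_rng(1)
purity_hat = rng.uniform(85, 100, 100_000)
recovery_hat = rng.uniform(80, 100, 100_000)

def design_space_mask(recovery_target):
    """Membership mask under a relaxed recovery constraint; the purity
    constraint (>=95%) is non-negotiable and stays fixed."""
    return (purity_hat >= 95.0) & (recovery_hat >= recovery_target)

for target in (90.0, 89.0, 88.0, 85.0):
    frac = design_space_mask(target).mean()
    print(f"recovery >= {target:.0f}%: {frac:.1%} of grid points satisfy")
```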

Fig. 4 shows the design spaces identified with different recovery targets, ranging from 85% to 90%. As we can see, the design space expands predominantly in the direction of increasing pH as the recovery constraint is relaxed. When relaxing the recovery constraint to 89% (a 1% decrease from the nominal target), the size of the resulting AOR increases by four orders of magnitude (Fig. 4a, b). Relaxing the recovery constraint from 89% to 88% yields a further increase of the AOR (Fig. 4b, c), while the latter does not change upon relaxing the constraint down to 85% (Fig. 4c, d). This behaviour can be explained by the change of the active constraint from recovery to purity. Recall, from Fig. 2a, that the cost-optimal point obtained from the Sobol sequence lies relatively further away from the purity constraint than from the recovery constraint. This indicates that the recovery constraint is active, which is further evidenced by the increase in AOR size when the recovery constraint is relaxed. When the active constraint switches from recovery to purity, the region cannot expand further without violating the purity constraint, because the identified AOR is centred around the cost-optimal design. It is noteworthy that if the cost-optimal solution from NSGA-II were to be investigated as the NOP, both the recovery and purity constraints would need to be relaxed to see an increase in flexibility, as both constraints are active at that particular solution.

Fig. 4: Design space identification with progressive relaxation of the recovery constraint.

a Design space with the original recovery constraint (≥90%). b–d Design space with relaxed recovery constraints of 89%, 88%, and 85%, respectively. The design space for each case is shown as the shaded grey region. The nominal operating point and corresponding acceptable operating region for each case are shown in blue. The quasi-random sampled points satisfying the constraints of each case are shown.

In Fig. 5, the three relaxed cost-optimal designs are compared quantitatively. As anticipated above, the AOR size increases substantially (×4.5) when the recovery constraint is relaxed from 89% to 88%, whereas the relative increase is quite modest (7%) with a further relaxation of the constraint down to 85% (Fig. 5a). The corresponding MPARs of the different cases are summarised in Fig. 5b. Generally, all three cases have relatively low flexibility: at most, a disturbance of ≤4.5% on the three operating variables (vF, pH, and pI) can be accommodated by the design. The fact that the percentage acceptable ranges are equal across the three DDs, to one decimal place, is a coincidence; this alignment can be attributed to the spatial positioning of the cost-optimal point relative to the boundaries of the design space, and is not always observed, as shown in the flexible design case (Fig. 6b). In Fig. 5c, we show the corresponding distributions of the process KPIs within the AOR. We find that the KPI distributions for the 88% and 85% cases essentially overlap, in accordance with these two cases having virtually identical AORs. Not surprisingly, the mean of the recovery distribution is reduced with increasing relaxation of the recovery constraint, and the distribution becomes more skewed towards lower recovery values. In contrast, the purity distribution expands more symmetrically, and its mean remains the same for each design (including the original cost-optimal design). This observation supports our earlier hypothesis that purity becomes the active constraint in the relaxed designs. The behaviour of the means of the energy usage and capture cost distributions (both increasing with relaxation) appears less intuitive, as one would expect the relaxed scenarios to consume less energy. However, these scenarios carry a higher specific energy demand and cost because less CO2 is captured. Yet, the 88% recovery case has a mean capture cost of 62.47 $ tonne−1, which is only 2% higher than the cost-optimal solution, while offering an AOR that is 63% larger than that of the 89% recovery case. Hence, this could represent an effective method for increasing process flexibility.

Fig. 5: Flexibility metrics comparison between three relaxed cost-optimal design cases.

a Acceptable operating region (AOR) volume. b Multivariate proven acceptable ranges (MPARs) of the design decisions. c Distribution of all monitored KPIs within the identified acceptable operating region. The dashed line represents the mean value of the KPIs, while the blue dash-dot line shows the KPI value at the cost-optimal point.

Fig. 6: Flexibility metrics comparison between the maximum flexibility design and the relaxed cost-optimal design.

a Acceptable operating region (AOR) size. b Multivariate proven acceptable ranges (MPARs) of the design decisions. c Distribution of all monitored KPIs within the identified AOR. The dashed line represents the mean value of the KPIs, while the blue dash-dot line shows the KPI value at the cost-optimal point.

The framework has successfully quantified the impact of performance constraint relaxation on process flexibility. In this case, a relaxation of the recovery constraint to 85% is not considered a worthy trade-off in the design, given the diminishing return on process flexibility. It is also worth noting that while this strategy does allow for a moderate increase in process flexibility, it also demands some decrease in the amount of CO2 captured from the power plant flue gas. Ultimately, at the industrial scale, such a violation may not be desirable from an environmental perspective, as small changes in the recovery can lead to substantial increases in the absolute plant emissions.

Flexibility by design

We have used an iterative quasi-random grid search approach to find the design which gives the largest possible AOR while meeting the original purity/recovery constraints. As shown in Fig. 3, this design that maximises flexibility lies at the centre of the design space. The capture cost of the flexible design is 70.16 $ tonne−1, which represents an increase of 12.5% relative to the cost-optimal design. Therefore, we can infer that there is an inherent trade-off in the design between capture cost and flexibility. This trade-off will need to be effectively rationalised in the design workflow to yield carbon capture processes that are both economical and operable. Notably, unlike in the relaxed design case, we show in the following that the higher cost of this new design is associated with improved separation performance.

We compare in Fig. 6 the most flexible design and the relaxed cost-optimal design (recovery ≥88%). Details of the process scale-up and KPIs for these two designs can be found in Supplementary Note 11. The most flexible design offers an AOR that is 13-fold larger than that of the relaxed cost-optimal design (Fig. 6a). The larger AOR translates directly into wider MPARs for pH, pI, and vF, as shown in Fig. 6b. The most flexible design can now accommodate variations in the DDs of up to 10% (pH and vF) and 20% (pI). The cost-operability trade-off is also visible in the relative location of each design decision. For example, relative to the relaxed cost-optimal design, the flexible design has a higher high pressure (pH) and a lower intermediate pressure (pI). The lower value of pI generates higher purity by rejecting more nitrogen from the bed during blowdown. The higher value of pH increases the recovery of CO2 by increasing CO2 uptake during the adsorption step. However, a higher pH is achieved by operating the feed compressor at a higher outlet pressure, increasing the electricity usage of the process. We thus expect the capture cost of the flexible design to be larger than that of the relaxed cost-optimal design.

The distribution of the KPIs for these two design cases offers additional insight into the cost-operability trade-off (Fig. 6c). Across all KPIs, the flexible design shows wider distributions. This is expected, as a larger AOR encompasses more combinations of DDs. For example, the nominal productivity of the two designs is similar (~2 mol m−3 s−1), but the most flexible design shows a standard deviation of 0.15 mol m−3 s−1, which is 2.5× larger than the value observed for the cost-optimal design. Notably, the purity and recovery distributions for the relaxed cost-optimal design push against the respective constraints, whereas the most flexible design displays purity and recovery distributions shifted towards enhanced separation performance. To generate additional flexibility for the most flexible process design, the grid search algorithm needs to move the NOP away from the boundary of the design space. This means that the purity and recovery achieved by the process need to exceed the strict requirements to give space around the NOP. This is a key co-benefit of the most flexible design: its flexibility comes with a higher specific capture cost (13% higher) and specific energy consumption (20% greater), but the performance of the separation is also increased. The most flexible design prevents a greater amount of CO2 emissions from the power plant from reaching the atmosphere, and the outlet CO2 stream is purer and could therefore be suitable for a wider variety of downstream utilisation processes. We note from a practical perspective, as shown in Table S8, that the implementation of each process operating point at the industrial scale is not substantially different. Therefore, the flexible design case can be implemented without a substantial increase in the required system complexity or land footprint.

Conclusion

We have analysed the operational flexibility of an adsorption-based post-combustion CO2 capture process through a design space identification framework. We find that the design approach of minimising capture cost yields a process that is inherently inflexible. In practical scenarios of transient flue gas production, such a design would fail to meet the commonly adopted purity/recovery (95%/90%) constraints on the produced CO2. We propose and compare two alternative design approaches aimed at introducing operational flexibility. First, we consider relaxation of the recovery constraint (down to 85%), while retaining the cost-optimal design as the NOP. This approach yields moderate flexibility (variations in the operating variables of up to 4.5%), but also demands that the capture rate of the plant decreases during disturbed operation (by up to 7%). Second, we identify the operating point with maximum flexibility in the design space and observe that the process can accommodate variations in the operating variables of up to 10–20%. However, the capture cost increases by 12.5% relative to the cost-optimal design, because this added flexibility is achieved by exceeding the purity/recovery targets.

The transition to a sustainable and reliable energy system will encompass a growing proportion of baseline generation provided by intermittent renewables, with load-balancing handled by low-carbon fossil fuel-fired power generation. As such, the latter must be associated with a CO2 capture process that can accommodate disturbances in flue gas conditions resulting from the load-balancing requirements. In particular, variability in the flue gas feed, including composition, flow rate, temperature, and the presence of impurities, should be incorporated as DDs to fully understand the process flexibility under operation with variable and uncertain feed conditions. Here, we have addressed variability in the flow rate; this scenario is indeed very likely, because the relationship between the power output and the exhaust gas flow rate is linear46. Adsorption processes are typically modular in nature and implemented via parallel trains, meaning that this scenario can be addressed by adjusting the number of trains in use at any given time. Consideration of other variables (e.g., flue gas composition) within the presented framework will be challenging owing to the high dimensionality of the resulting design space problem. Nonetheless, this constitutes future work, in which methodologies catering to the complexity of high-dimensional spaces will be explored. The results herein demonstrate a trade-off between process economics and process operability, which must be effectively rationalised to integrate CO2 capture units in low-carbon energy systems. In this context, future work must consider the integration of constraints related to process operability within the design framework, to arrive at capture processes that both have an acceptable capture cost and are sufficiently flexible to handle highly transient flue gas production. Such information should be obtained through detailed model-based control studies, using realistic process disturbance data as input, to understand the level of flexibility required to arrive at a controllable process design and operational scheme.

Methods

PVSA process model

The high-fidelity mathematical model used to simulate the PVSA process analysed in this study was developed during our previous work37,47. In this sub-section, we provide a brief overview of the model formulation and implementation.

The model simulates the performance of a typical four-step pressure-vacuum swing adsorption cycle with feed pressurisation37,40. A schematic representation of the process cycle is provided in the Supplementary Note 3. The steps of the cycle are (1) high-pressure adsorption, (2) forward blowdown, (3) reverse evacuation, and (4) feed pressurisation. To model the adsorption column dynamics, we have described the flow of an ideal gas mixture through a packed bed of adsorbent pellets using the axially dispersed plug flow model. This is coupled with a solid-phase adsorption kinetics model which expresses the rate of adsorption into the packed bed as a first-order mass transfer process through the linear driving force approximation, assuming that the rate of adsorption is limited by the diffusion of adsorbate molecules in the inter-crystalline macro-pores of the adsorbent48 and adopting the modification of Hassan et al.49 to allow for extension of the expression into the non-linear region of the adsorption isotherm. The pressure drop in the column has been described using Darcy’s law. The energy balance equations describe several relevant mechanisms of heat transfer in both the gas phase and the column wall, including conduction, convection, heat released through exothermic adsorption, and heat losses to the environment. The resulting adsorption column model is a coupled system of partial differential equations (PDEs), which are provided in non-dimensional form in Supplementary Note 1. Definitions of the non-dimensional variables and dimensionless groups used in the model formulation are provided in Supplementary Note 2.
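As a minimal illustration of the linear driving force closure described above, the sketch below gives the first-order rate expression together with the classical Glueckauf estimate of the lumped mass-transfer coefficient. Note that the model in this work uses the modified form of Hassan et al.49 for the non-linear isotherm region, not this textbook version.

```python
def ldf_coefficient(De, rp):
    """Classical Glueckauf estimate of the lumped LDF coefficient for
    macro-pore-limited kinetics: k = 15 * De / rp**2 (s^-1), with De the
    effective macro-pore diffusivity (m^2 s^-1) and rp the pellet
    radius (m). Shown as a starting point only."""
    return 15.0 * De / rp**2

def ldf_rate(k, q_eq, q):
    """Linear driving force approximation: dq/dt = k (q* - q), a
    first-order approach of the loading q towards equilibrium q*."""
    return k * (q_eq - q)
```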

The governing balance equations are a coupled set of PDEs that describe variation in the column state variables as a function of both space and time. To solve the balance equations, the PDEs have been discretised with respect to space using the weighted essentially non-oscillatory finite volume scheme to yield a system of coupled time-dependent ordinary differential equations (ODEs)40,50,51. This set of ODEs has been solved in MATLAB using the ode15s solver. The integration of the equations is carried out subject to boundary conditions representing the four-step adsorption cycle37,40. The boundary conditions used to represent the 4-step PVSA cycle are provided in Supplementary Note 3. To allow for efficient numerical solution, the balance equations have been solved in non-dimensional form. Further, we provide the stiff ODE solver with the Jacobian sparsity pattern of the system to allow for further efficiency improvements. We note that the system states in the ODEs are co-dependent, which we have handled in the solution procedure using a mass matrix approach. Owing to the cyclic nature of the adsorption process, the balance equations must be integrated over several cycles until a cyclic steady state (CSS) is attained, defined by a material balance error of ≤0.5% over the previous five cycles for both CO2 and N2. We therefore define CSS using52:

$$100\% \times \left| \frac{n_i^{\mathrm{in}} - n_i^{\mathrm{out}}}{n_i^{\mathrm{in}}} \right| \le 0.5\%, \quad \forall i$$
(1)
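A minimal sketch of this successive-substitution procedure is given below. Here, simulate_cycle is an assumed interface to the single-cycle integration (e.g., a wrapper around the ode15s solve) and is not part of the published code.

```python
import numpy as np

def run_to_css(simulate_cycle, state0, tol=0.5, window=5, max_cycles=500):
    """Repeat single-cycle integrations until the mass-balance error of
    Eq. (1) stays within `tol` percent for `window` consecutive cycles,
    for every component (CO2 and N2)."""
    state, streak = state0, 0
    for cycle in range(1, max_cycles + 1):
        # Assumed interface: advance one full PVSA cycle and return the
        # per-component moles entering and leaving the column.
        state, n_in, n_out = simulate_cycle(state)
        err = 100.0 * np.abs((n_in - n_out) / n_in)  # % error per component
        streak = streak + 1 if np.all(err <= tol) else 0
        if streak >= window:
            return state, cycle
    raise RuntimeError("CSS not attained within max_cycles")
```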

Once CSS has been attained, the trajectories of the state variables as a function of space and time across a complete cycle can be used to calculate the performance of the adsorption process through the evaluation of KPIs. We have used the purity \((\mathrm{Pu}_{\mathrm{CO_2}})\) and recovery \((\mathrm{Re}_{\mathrm{CO_2}})\) of the extracted CO2, the process productivity (Pr), and the specific energy usage (ET) as KPIs, which are calculated as:

$$\mathrm{Pu}_{\mathrm{CO_2}}\,(\%) = 100 \times \frac{n_{\mathrm{CO_2,out}}^{\mathrm{evac}}}{n_{\mathrm{CO_2,out}}^{\mathrm{evac}} + n_{\mathrm{N_2,out}}^{\mathrm{evac}}}$$
(2)
$$\mathrm{Re}_{\mathrm{CO_2}}\,(\%) = 100 \times \frac{n_{\mathrm{CO_2,out}}^{\mathrm{evac}}}{n_{\mathrm{CO_2,in}}^{\mathrm{pres}} + n_{\mathrm{CO_2,in}}^{\mathrm{ads}}}$$
(3)
$$\mathrm{Pr}\,(\mathrm{mol\ m^{-3}\ s^{-1}}) = \frac{n_{\mathrm{CO_2,out}}^{\mathrm{evac}}}{V_{\mathrm{bed}} \cdot t_{\mathrm{cycle}}}$$
(4)
$$E_{\mathrm{T}}\,(\mathrm{kWh\ tonne^{-1}}) = \frac{E_{\mathrm{ads}} + E_{\mathrm{bd}} + E_{\mathrm{evac}} + E_{\mathrm{pres}}}{m_{\mathrm{CO_2,out}}^{\mathrm{evac}}}$$
(5)
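The KPI definitions in Eqs. (2)–(5) translate directly into code; the sketch below is a plain transcription with assumed argument names (per-cycle mole and mass totals evaluated at CSS).

```python
def process_kpis(n_co2_out_evac, n_n2_out_evac, n_co2_in_pres, n_co2_in_ads,
                 energy_kwh, v_bed, t_cycle, m_co2_out_evac_tonne):
    """Plain transcription of Eqs. (2)-(5); argument names are assumed.
    Moles n in mol, bed volume in m^3, cycle time in s; energy_kwh is
    the sum E_ads + E_bd + E_evac + E_pres in kWh."""
    purity = 100.0 * n_co2_out_evac / (n_co2_out_evac + n_n2_out_evac)
    recovery = 100.0 * n_co2_out_evac / (n_co2_in_pres + n_co2_in_ads)
    productivity = n_co2_out_evac / (v_bed * t_cycle)       # mol m^-3 s^-1
    specific_energy = energy_kwh / m_co2_out_evac_tonne     # kWh tonne^-1
    return purity, recovery, productivity, specific_energy
```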

Full details of the calculation of all process KPIs are provided in Supplementary Note 4. The adsorption equilibrium of CO2/N2 on zeolite 13X has been described using the extended dual-site Langmuir isotherm model. The isotherm model and associated simulation parameters are provided in Supplementary Note 5. The simulation parameters used as inputs to define the base-case used in this work are provided in Supplementary Note 6.
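For illustration, a common form of the extended dual-site Langmuir model with temperature-dependent affinity coefficients is sketched below. The exact functional form and fitted parameters used in this work are those of Supplementary Note 5, which this sketch may not match in detail; no parameter values are given here.

```python
import numpy as np

R = 8.314  # universal gas constant, J mol^-1 K^-1

def extended_dsl(p, T, qsb, b0, dUb, qsd, d0, dUd):
    """Extended dual-site Langmuir equilibrium loadings q* (mol kg^-1)
    for a gas mixture. p holds component partial pressures (Pa); all
    parameter arrays are per component (saturation capacities qsb/qsd,
    pre-exponentials b0/d0, internal energies dUb/dUd)."""
    b = b0 * np.exp(-dUb / (R * T))  # temperature-dependent site-1 affinity
    d = d0 * np.exp(-dUd / (R * T))  # temperature-dependent site-2 affinity
    site1 = qsb * b * p / (1.0 + np.sum(b * p))  # competitive site 1
    site2 = qsd * d * p / (1.0 + np.sum(d * p))  # competitive site 2
    return site1 + site2
```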

Economic assessment

The PVSA process model has been coupled to a detailed economic assessment to evaluate the cost per tonne of CO2 captured for post-combustion CO2 capture from the flue gas of a typical 1000 MW coal-fired power plant. The cost per tonne of CO2 captured is calculated as:

$$C_{\mathrm{CO_2}}^{\mathrm{cap}}\,(\$\ \mathrm{tonne^{-1}}) = \frac{\mathrm{Total\ annual\ cost}\ (\$\ \mathrm{yr^{-1}})}{\mathrm{Re}_{\mathrm{CO_2}} \cdot \dot{m}_{\mathrm{CO_2}}^{\mathrm{emitted}}\ (\mathrm{tonne\ yr^{-1}})}$$
(6)

Here, we briefly outline the three-stage economic assessment procedure used to calculate the cost per tonne of CO2 captured. Full details of the process scale-up and cost estimation procedure are provided in Supplementary Note 9. First, the adsorption process is scaled up from the operation of a single adsorption column, as simulated by the process model, to a full-scale set of parallel columns that is able to continuously accept the full volume of flue gas produced by the power plant, by scheduling individual adsorption columns into a number of parallel trains26. Second, the capital cost of the process equipment is estimated from correlations available in the literature53. We consider the major equipment costs, which are those of the adsorption columns, compressors, and vacuum pumps, to calculate the total direct cost (TDC) of the equipment54. We use a bottom-up approach to convert the TDC to the total capital required (TCR), which additionally accounts for costs associated with developing the project and allowing sufficient contingencies26. We then apply the capital recovery factor method to calculate the equivalent annual cost (EAC), which can be used in conjunction with the capture rate to calculate the cost of capital per tonne of CO2 captured28 (see the sketch below). Finally, we calculate the operating cost of the process through the contributions of electricity usage by the compressors and vacuum pumps, as well as the cost of annual adsorbent replenishment to account for continuous sorbent degradation54,55. We combine the annualised capital cost and the operating cost together to calculate the overall cost per tonne of CO2 captured. Details of the case study parameters and definitions of all key economic inputs for the cost estimation procedure are provided in Supplementary Notes 7 and 8.
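A minimal sketch of the annualisation step and Eq. (6) follows. The capital recovery factor formula is standard; the discount rate and project lifetime shown are illustrative placeholders, as the study's economic inputs are given in Supplementary Notes 7 and 8.

```python
def capital_recovery_factor(i, n):
    """CRF = i(1 + i)^n / ((1 + i)^n - 1), for discount rate i (fraction)
    and project lifetime n (years)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def capture_cost(tcr, opex_per_yr, recovery_frac, m_co2_emitted_per_yr,
                 i=0.08, n=25):
    """Cost per tonne of CO2 captured, Eq. (6). The values of i and n
    here are illustrative only."""
    eac = tcr * capital_recovery_factor(i, n)  # equivalent annual cost, $/yr
    total_annual_cost = eac + opex_per_yr      # $/yr
    return total_annual_cost / (recovery_frac * m_co2_emitted_per_yr)
```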

Process optimisation

When conducting formal optimisation of the performance of the PVSA process, we have deployed the non-dominated sorting genetic algorithm II (NSGA-II). We have utilised an implementation of the NSGA-II routine, which is available as the ga function in the MATLAB Global Optimisation Toolbox. When executing the NSGA-II routine, we have used a population size of 72 and run the algorithm for a maximum of 70 generations. The DDs and associated parametric bounds used during optimisation of the process performance can be found in Supplementary Note 10.

Design space identification

The design space identification framework deployed in this work was originally developed in previous work38. The framework comprises three main steps: (1) problem formulation, (2) design space identification, and (3) flexibility assessment. In step 1, the process of interest is characterised using mathematical models, which can be of any form (e.g., mechanistic, data-driven). Then, the DDs of interest are defined. Depending on the objective, these can include design parameters, operational variables, and process disturbances. Finally, the monitored KPIs, feasibility constraints, and performance constraints are defined.

In step 2, quasi-random Sobol sampling is performed as per the problem definition to generate the KSp. From this data set, a neural network is trained and used to increase the resolution of the data set. Then, the constraints are imposed to characterise the satisfied and violated points. A bisection-based algorithm is then used to calculate an alpha shape, with an alpha radius that characterises the design space without violations inside of it. Finally, in step 3, the identified design space is utilised to quantify AORs with respect to any NOP. This is also done using a bisection-based algorithm, in which we expand a cuboid region from the NOP of interest until one of its vertices hits the edge of the design space (a simplified sketch is given below). Based on this, the MPARs can be extracted, and distributions of the monitored KPIs can be evaluated. We have developed an open-source Python package that can be used for steps 2 and 3 of the framework (‘dside’: https://github.com/stvsach/dside).
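The sketch below shows a simplified, isotropic variant of this expansion: a single scale factor is bisected, whereas the procedure described above may grow the cuboid vertex-wise. Here, in_design_space stands for an assumed point-membership test against the alpha shape (e.g., as provided by dside) and is not a documented API.

```python
import numpy as np
from itertools import product

def expand_aor(nop, in_design_space, bounds, tol=1e-4):
    """Bisect on a scale factor s in [0, 1]: the candidate AOR is the
    cuboid nop +/- s * half_range per design decision (bounds is a
    (d, 2) array of DD bounds), accepted only while every vertex of
    the cuboid lies inside the design space."""
    nop = np.asarray(nop, dtype=float)
    half_range = 0.5 * (bounds[:, 1] - bounds[:, 0])
    corners = np.array(list(product((-1.0, 1.0), repeat=len(nop))))

    def inside(s):
        return all(in_design_space(nop + s * half_range * c) for c in corners)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(mid) else (lo, mid)
    # MPAR bounds of the accepted cuboid
    return nop - lo * half_range, nop + lo * half_range
```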

Quasi-random sampling for KSp generation

In this work, the KSp generation is performed using the quasi-random Sobol sequence56. The Sobol sequence is used to generate the combinations of DDs in Python using the scipy.stats.qmc.Sobol class from the scipy library; this can also be done using the SobolGSA software developed by Kucherenko et al.57,58. The sequence is a quasi-random technique that aims to reduce discrepancy within the sampled space. This enables the training of an ANN that captures the KSp as a whole and is capable of performing satisfactory interpolation. The sequence is used to generate 4096 design decision combinations, which are then scaled appropriately with respect to the design decision bounds of the study (see the sketch below). Then, the feasibility constraint tied to the high and intermediate operating pressures (pH > pI) is imposed to filter out infeasible input combinations. A total of 3458 input combinations remain after the screening and are used to run the high-fidelity model simulations in parallel. When sampling the KSp, the following parametric sampling bounds are applied: pH ∈ [1, 10] bar, pI ∈ [0.05, 5] bar, and vF ∈ [0.1, 2] m s−1. We fix the remaining operating parameters of the system at the following values: pL = 0.03 bar, tads = 50 s, and tbd = tevac = 100 s.
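A minimal sketch of this sampling step using scipy is shown below. Whether scrambling and a fixed seed were used in the study is not stated, so both are illustrative choices here.

```python
import numpy as np
from scipy.stats import qmc

# 2^12 = 4096 quasi-random points over the three design decisions.
sampler = qmc.Sobol(d=3, scramble=True, seed=0)  # scramble/seed illustrative
unit = sampler.random_base2(m=12)

# Scale to the sampling bounds used in this work.
lower = [1.0, 0.05, 0.1]   # pH (bar), pI (bar), vF (m/s)
upper = [10.0, 5.0, 2.0]
samples = qmc.scale(unit, lower, upper)

# Feasibility screen on the operating pressures (pH > pI);
# 3458 of 4096 combinations remained in the study.
pH, pI, vF = samples.T
feasible = samples[pH > pI]
print(f"{len(feasible)} feasible design decision combinations")
```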

Data-driven resolution support

The design space is defined as a region within the KSp where all constraints are satisfied. The design space identification framework relies on the calculation of alpha shapes, which are geometrical hulls59 that can be used to define the design space boundary. In order to characterise design spaces with smooth boundaries and no internal violations, it is crucial to utilise a high-resolution data set38. To acquire a higher-resolution data set, one could perform more sampling based on the high-fidelity model. However, this would incur a substantial computational burden, because the mathematical process model is computationally expensive and requires several minutes of CPU time to acquire a single data point. Therefore, we have deployed an ANN surrogate model to act as an interpolator on the data set generated by sampling the high-fidelity model, for the purpose of increasing the resolution of the sampled data set and enabling accurate characterisation of the design space. In this work, three separate neural networks are trained to predict the different KPIs, given the combination of DDs (pH, pI, and vF). The first ANN predicts purity and recovery, the second predicts energy consumption and productivity, and the third predicts the capture cost. The inputs and outputs for all of the ANNs are normalised using the min/max method. The KSp data set was shuffled and split 90%/10% into training and testing sets, respectively. Each neural network uses a feed-forward architecture with three hidden layers of 256 hidden units each, where each hidden unit applies the rectified linear unit (ReLU) function to its inputs. The network architecture was determined by a grid search over candidate architectures. The weight/bias parameters for the networks have been trained using the adaptive moments (Adam) optimisation algorithm with a learning rate of α = 5 × 10−5 for 50,000 epochs. The training is performed in Python using the pytorch library; a minimal sketch follows.
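The sketch below is consistent with the description above but is not the published training code: the mean-squared-error loss, full-batch updates, and the two-output head (purity and recovery, as for the first ANN) are assumptions, and X and Y stand for the min/max-normalised training tensors.

```python
import torch
from torch import nn

# Architecture as described: 3 inputs (pH, pI, vF) and three hidden
# layers of 256 ReLU units; two outputs here, matching the first ANN.
model = nn.Sequential(
    nn.Linear(3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2),
)

optimiser = torch.optim.Adam(model.parameters(), lr=5e-5)  # Adam, lr 5e-5
loss_fn = nn.MSELoss()  # assumed; the loss function is not stated

def train(X, Y, epochs=50_000):
    """Full-batch training on normalised tensors X (N, 3) and Y (N, 2);
    the batching strategy is an assumption."""
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        optimiser.step()
    return model
```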