
Save Solvent and Samples with Compact Capillary Scale LC

May 15, 2025

Capillary scale liquid chromatography offers separations comparable to the more traditional analytical scale at a fraction of the solvent consumption. Herein we describe a compact capillary scale LC instrument. To be considered a viable replacement for larger scale systems, a system must demonstrate its robustness and repeatability. We will discuss key repeatability metrics including interday and intraday repeatability, gradient repeatability, and injection carryover. These metrics will provide valuable insights to those looking to integrate capillary scale separations into their workflows.

  • See how capillary scale chromatography can help your lab save solvent and sample.
  • Learn about compact capillary scale instrumentation and its current applications.
  • View methods for the quantification of system repeatability, and discover how the Axcend Focus LC performs.

Webinar

Speaker

Samuel Foster, Ph.D.
Application Scientist
Axcend

Samuel Foster completed his Ph.D. in Pharmaceutical Chemistry from Rowan University in 2025. His research has focused on the development and application of capillary scale liquid chromatography instrumentation. He currently works at Axcend as an application scientist focusing on the development of chromatographic workflows for a variety of analyte classes including oligonucleotides, monoclonal antibodies, and drugs of abuse.


 

Transcript

Hi, everybody. Thank you all for coming. I'm Dr. Samuel Foster, an application scientist here with Axcend, and today I'm going to be discussing the repeatability of our capillary scale LC instrumentation platforms. To start out, I want to give an overview of some of the platforms that we offer for capillary scale liquid chromatography. Then I want to move into the growing need for repeatable instrumentation and start to define the repeatability metrics that we want to achieve.

We're then going to look at both intraday and interday repeatability studies, as well as injection carryover on some of our newest autosamplers.

So to start off with, Axcend offers not just our portable Focus LC; we've grown to now offer the smallest full stack system in the industry. You can see on the right there an autosampler on top, the full HPLC in the middle, and a diode array detector at the bottom. Now, this is not only a small form factor LC, it's a capillary scale LC, which means considerable solvent savings, which we'll get into and quantify a little bit later.

On top of that, we also have lower sample volume requirements. So this allows not just for more chromatography to be performed, with more instruments in a given bench space; it also brings savings and benefits in terms of both sample consumption and solvent consumption.

So to start off with, I want to take a look at the key HPLC, the Axcend Focus LC. This is a gradient system capable of achieving pressures of 10,000 psi, or 670 bar, which is well in range for most, if not all, commercially available columns. We have a ten hour battery life, so this is a fully portable system.

You can carry it around and move it from bench to bench. We allow for easily replaceable cartridges capable of fitting most commercially available columns. This also comes with a column oven and single wavelength LED detectors, allowing for really robust and customizable chromatographic analysis. Additionally, we are able to carry our entire solvent volume right there on the front.

You can see those three vials on the front. That's your mobile phase and your waste, and these are only 15 milliliter vials. That being said, because of the capillary scale flow rates, these vials will last a considerable amount of time. In fact, most of the time I'm changing them out because the solvent has gone old rather than because it's actually run out.

And so it's a big shift from the liter bottles on top of traditional big box LCs down to these 15 milliliter solvent vials. The HPLC isn't the only thing that we offer. We have the Axcend inFocus, which was designed for continual process monitoring and online reaction monitoring. It allows for some pretty robust sample handling and various different sampling and processing techniques.

It allows for online filtration, and we're working on dilution and reaction quenching. So this is a solution for an automated workflow if you want to continually monitor a reaction batch. And because it's capillary scale, we're able to draw only very small microliter aliquots of sample to perform these separations with. So this is a very versatile and very robust sampling system for really long, continuous, and automated processes.

Next, this is one of our newest products, the Axcend AutoFocus. This is a fully customizable autosampler. It allows for the standard 96 well plate, as well as a well plate of 42 2 milliliter vials. It has very robust and repeatable sample handling with very low carryover, and we're going to get into some carryover quantification later on in this talk.

Additionally, it's temperature controlled; we're able to get samples down as low as eight degrees Celsius, so we're compatible with a lot of existing methodologies and workflows. Finally, our last product is the Axcend diode array detector, the Focus Array. This is the only DAD for capillary scale liquid chromatography. This system is great because, rather than the traditional single wavelength cartridges that we offer, it allows for full spectrum analysis.

You're able to change wavelengths or collect the spectrum mid-run, allowing for peak identification based on spectra, or potentially the use of different wavelength or absorbance ratios to process unresolved peaks. And so this puts us on par with the traditional full stack HPLCs, at a much smaller, capillary scale form factor.

Now, I've mentioned a bunch of times that this system is a lot smaller in terms of bench space, but this really shows just how much smaller. On the right there we have our AutoFocus and Focus LC stack compared to a big box legacy LC. And already you can see that, size wise, it's much smaller, but it's also much lighter.

That full stack weighs about 30 pounds compared to the 116 pounds of the big box legacy LC. And so this offers a level of portability where, rather than, say, revalidating a method on a different instrument if you had to bring it to a different lab, you could simply bring the entire system with you and skip the need for that revalidation.

Additionally, because we're at the capillary scale, we can cut our flow rates down by a considerable amount. We typically operate anywhere from 1 to 10 microliters a minute, whereas at analytical scale you're typically operating between 0.1 and 5 milliliters a minute. So we're talking about very dramatic reductions in overall solvent consumption. We tend to use columns between 0.15 and 0.5 millimeter ID, rather than the analytical scale 1.0 to 4.6 millimeter columns.

And we also tend to use less total sample. We tend to do anywhere from 4 to 40 nanoliter injections. I've gone as high as one microliter injections, whereas on a big box legacy LC, we tend to start at one microliter injections and really only go up from there. So not only are we saving solvent, but in sample limited applications, where you have a very expensive sample or simply don't have a lot of it, this is also very useful because you can get more injections out of a single sample.
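The flow rate and injection volume reductions described above follow directly from the ratio of column cross-sectional areas. Here is a minimal sketch of that geometric downscaling; the 4.6 mm and 0.3 mm IDs fall within the ranges mentioned, but the specific starting method values are illustrative assumptions, not a method from the talk.

```python
def scale_factor(d_from_mm: float, d_to_mm: float) -> float:
    """Geometric downscaling factor: ratio of column cross-sectional areas."""
    return (d_to_mm / d_from_mm) ** 2

def scale_method(flow_ml_min: float, inj_ul: float,
                 d_from_mm: float, d_to_mm: float):
    """Scale flow rate and injection volume in proportion to column cross section."""
    f = scale_factor(d_from_mm, d_to_mm)
    return flow_ml_min * f, inj_ul * f

# Example: a 4.6 mm ID analytical method at 1.0 mL/min and 5 uL injections,
# translated to a 0.3 mm ID capillary column
flow, inj = scale_method(flow_ml_min=1.0, inj_ul=5.0, d_from_mm=4.6, d_to_mm=0.3)
print(f"flow: {flow * 1000:.1f} uL/min, injection: {inj * 1000:.0f} nL")
```

Note that the scaled values land squarely in the 1 to 10 microliter per minute and 4 to 40 nanoliter ranges quoted above, which is exactly the point of proportional scaling.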

I've also mentioned a little bit about how capillary scale brings solvent savings, but this, I think, really hits home just how dramatic those savings are. We had a collaborator over at Merck who did a study with their traditional big box HPLC, running at 0.8 mL a minute for eight hours a day, five days a week.

Across the whole year, they generated about 100 L of total solvent waste. If you took that same method, scaled it down to the capillary scale, and performed it on the Focus LC, you'd be able to do it with only about 200 mL of solvent. And if we look at the total costs for those two, we go from $28,000 to $27.

So it is a dramatic reduction in overall cost and in overall solvent consumption. It's a much greener technique, but I think for a lot of the business people out there, the key point is that it's a much cheaper technique. And that's really one of the key benefits of swapping to this capillary scale.
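As a back-of-the-envelope check on those annual figures, the arithmetic can be sketched as below. The 0.8 mL/min analytical flow is taken from the study described above; the roughly 1.6 µL/min capillary flow is an assumption inferred from the quoted 200 mL total.

```python
# 8 h/day, 5 days/week, 52 weeks/year of run time
MIN_PER_YEAR = 8 * 60 * 5 * 52  # 124,800 minutes

def annual_solvent_liters(flow_ml_per_min: float) -> float:
    """Total mobile phase pumped in a year of daytime operation, in liters."""
    return flow_ml_per_min * MIN_PER_YEAR / 1000.0

analytical = annual_solvent_liters(0.8)     # big box HPLC at 0.8 mL/min
capillary = annual_solvent_liters(0.0016)   # capillary LC at ~1.6 uL/min
print(f"analytical: {analytical:.0f} L/year, capillary: {capillary * 1000:.0f} mL/year")
```

The analytical case works out to just under 100 L per year and the capillary case to about 200 mL, consistent with the numbers quoted in the talk.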

Now I want to take a sidebar to look at what it actually means to have repeatable instrumentation. I think the words repeatability and robustness get thrown around a lot, so I want to take a second to define them and to define the goals of what we're actually trying to achieve and demonstrate with this instrumentation.

So the ICH guidelines define precision at three different levels. They define repeatability, which is a single sample run by a single analyst over the course of about a day. Then we have intermediate precision, which is if you take a different analyst, have them prep the samples themselves, and run them over multiple days; that would be intermediate precision.

And then reproducibility is having multiple analysts across multiple different laboratories perform the same sample preparation and the same analysis. We are not necessarily going to look at reproducibility, and we're only going to dip our toes into intermediate precision, because a lot of the need for different analysts to prepare samples comes from validating a method and a sample preparation procedure, rather than validating the actual instrumentation.

And so in this case, we're just doing runs over the course of multiple days, but performed by a single analyst. Now, how repeatable is repeatable? We always want to strive for the most repeatable data possible, but the reality is it kind of depends. There's no one size fits all answer to how repeatable your instrument needs to be. I've seen monographs that allow for 5% RSD on retention times and 25% RSD on peak areas, and I've seen sub-1% RSD monographs that require very, very high precision across all their peaks.

So in reality, a lot of it boils down to the analyte, the field, and what you're measuring, and there's no single answer. That being said, the traditional literature values for what counts as repeatable are below 2% relative standard deviation for your peak area and below 1% relative standard deviation for your retention time.

And so those are going to be the metrics that we're looking to stay below. In fact, we want to be as low as possible, but if we're below those two, we feel really good about the data that we're putting out. Now we're going to take a look at intraday repeatability. This is going to be that initial repeatability level in the ICH guidelines.
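The repeatability metrics discussed here are straightforward to compute; this is a minimal sketch of the percent RSD calculation against those two thresholds. The replicate values below are made up for illustration, not the webinar's raw data.

```python
import statistics

def percent_rsd(values):
    """Sample standard deviation as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Illustrative retention times (min) and peak areas (mAU*s) for replicate injections
retention_times = [2.001, 2.003, 1.999, 2.002, 2.000, 1.998]
peak_areas = [122.4, 121.9, 122.8, 121.6, 122.2, 123.0]

print(f"RT %RSD:   {percent_rsd(retention_times):.3f}")  # target: below 1%
print(f"Area %RSD: {percent_rsd(peak_areas):.3f}")       # target: below 2%
```

Note the use of the sample standard deviation (`stdev`, n - 1 denominator) rather than the population form, which is the convention for replicate injection statistics.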

This is a single sample run multiple times over the course of a single day or a short period of time. For this we decided to use an Agilent Eclipse Plus C18 column, custom packed. It's a 0.3 by 100 millimeter column with 1.8 micron particles. We're running at 4 microliters a minute, isocratically, at a 60:40 ratio of water to acetonitrile.

We're doing single wavelength detection at 255 nanometers, and we have an injection volume of 40 nanoliters, so a much smaller injection volume than what you would traditionally see. We also elevated the temperature to 35 degrees C using our heated column cartridge, and we chose a sample of uracil, acetophenone, propiophenone, and butyrophenone at 50 parts per million in 70:30 water to acetonitrile. Over here we see a single run representative of that chromatography.

But if we look forward here, we can see our 30 injection sequence over the course of a single day. We see uracil, which is our dead time marker, and then we see acetophenone, propiophenone, and butyrophenone. What we're able to see is that across these four different peaks, these are very repeatable.

In fact, it's hard to differentiate them one to one because they all just blend together into an individual peak. We're going to take a look at each one of these a little bit closer and break down their retention time and peak area across this single day sequence. So starting out, we're going to look at our first eluter. The uracil we're sort of ignoring, because we're just using it as a dead time marker here; the acetophenone, because that's where chromatography is going on, we decided would be our first peak of interest.

We decided that would be our first peak of interest. And what we see is we have, a very low relative standard deviation on both the retention time and the peak area comes out at about two minutes. And we have about 122 million views per second. The relative standard deviation is well under that 1% threshold. In fact, in this case, we can expect, about a quarter of a second relative standard deviation through these different three, 30 runs.

So it's very repeatable. It really doesn't shift much, and the peak area is very maintainable. Looking over at the propiophenone, we again see its retention time relative standard deviation is very low, at about 0.13%. Its peak area is a little bit higher; we're starting to push close to that 1% relative standard deviation.

Part of that comes from just minor fluctuations in how much sample we're loading onto the column and in detector response. Part of that comes from the way we actually do integration. As your peaks get broader, you tend to start to lose small amounts of the edges, and so depending on how you define your integration thresholds, you do start to see slight differences in the overall area relative standard deviation.

That being said, both of these are still well within our expected limits. We have well under one second of total fluctuation between these different peaks, and we're well below that 2% on the total peak area. Finally, with our last eluter, the butyrophenone, we again see a very repeatable 0.139% relative standard deviation for its retention time. Our peak area repeatability is just a little bit higher.

Now we're at about 1.5%, but that's still well within our 2% threshold. And again, a lot of that boils down to slight variations in injection and slight variations in where we define the integration limits. So overall, we were very happy with this data, as we were well below what we wanted and we saw really repeatable chromatography. Going forward, though, no experiment is performed entirely in one day and then never touched again.

So we need to make sure that our system works over the course of multiple days. And so we took this same separation of these four components and performed it over the course of three different days, running five runs a day. The data you're going to see is a summary of five runs across each of those three days.

So here we see our three day repeatability. We have day one in yellow, day two in purple, and day three in blue. Again, it's incredibly hard to differentiate these because they are so overlapped; they sort of blend together. We are going to take a closer look at some of these different peaks and see if we are within the thresholds we expected, not just for a single day, but across multiple days.

So for interday repeatability, we wanted to take a look at our latest eluter, the butyrophenone, because that one had the most variance. With the butyrophenone, we had a retention time relative standard deviation across a single day of 0.183%, whereas across multiple days, since we had smaller sample sizes, we're still below that 1%.

But we're at about 0.7%. So we're still within those repeatability thresholds, but because we're only using an n of 3, or three different sample sets, any minor fluctuations start to have a much larger effect. And so we were very happy with this, as we were able to see not just intraday but interday repeatability. Next, let's take a look at all the different peaks.

We don't necessarily need to go day by day and peak by peak, but here we have two different graphs showing the retention time relative standard deviations and the peak area relative standard deviations. What we can see is that across all three days, we never once exceed our threshold of 2% RSD for the peak area, and we never exceed the 1% for our RSD on retention times.

This was really a great sign, because even on an off day where we're seeing relatively high RSD on some peak, we never once break the thresholds that we were hoping to set. And so we were very happy with how repeatable and reproducible this was. Finally, I just want to take a look at some of the injection carryover.

As I previously mentioned, we have our new AutoFocus, a full autosampler that is comparable to some of the big stack LCs out there, and we wanted to measure and quantify how repeatable its injections are and how it performs in terms of injection carryover, because that's a huge problem if you don't do it right.

So I previously talked a little bit about the AutoFocus. It is a comparable autosampler to a lot of the traditional big box autosamplers. We're able to fit the standard 96 well plate or a well plate of 42 2 milliliter vials. We have a cycle time of about 30 seconds, which some of the fastest autosamplers I've seen could do in about 15.

So we're very on par. We have very good injection volume precision of less than 0.25%, which, when you consider that we're operating with injection volume ranges between 0.1 and 50 microliters, is very repeatable, especially with these small sample volumes. We do see a very small carryover of about 0.09% or less, and we'll discuss how that stacks up against the industry.

But that's very good. So how this sampler works is we use a system of two separate needles. We have a piercing needle, which is typically metal, in order to puncture the vial septum. Then we have a sampling capillary, which comes out of that with a much narrower inner diameter in order to manipulate these smaller volumes. And I want to draw attention to that, because while this is really effective for pulling up and analyzing small amounts of liquid, it does run the risk of large carryover, especially if the seal between the sampling capillary and the piercing needle isn't fully complete.

And so we're still in the process of optimizing this and really getting it as low as we can, but I wanted to bring up that that is a potential challenge. That said, we've also been able to demonstrate that it isn't a challenge with what we've built. So to test the carryover, and these are still ongoing studies, we took a massive caffeine sample, in this case 1000 ppm.

You can see it there at around 15,000 milli-absorbance units, a gigantic slug. We sent it through the system, and then afterwards we injected just a blank sample. And while we still do see a little bit of caffeine in there, we are at very dramatically different scales; in this case, we're only at about 15 milli-absorbance units versus 15,000. And here's what that looks like when we stack them up.

And again, you really can't see anything from the blank. Looking into the literature for guidance on what acceptable carryover limits are, we decided that anything below 0.1% would be acceptable, and here we demonstrated 0.09%. And so we were very happy that this showed that, even though we were using the dual needle approach, we were able to really limit the carryover with proper washing and cleaning in between different samples.
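The carryover figure quoted above is simply the analyte's peak area in the blank expressed as a fraction of the preceding loaded injection. A minimal sketch of that calculation follows; the specific area values are illustrative assumptions chosen to match the scales described, not measured data from the study.

```python
def percent_carryover(blank_area: float, loaded_area: float) -> float:
    """Analyte peak area in a blank, as a percentage of the prior loaded injection."""
    return blank_area / loaded_area * 100.0

# Roughly the scales from the talk: a ~15,000 unit caffeine slug,
# followed by ~13.5 units of residual caffeine in the blank
carry = percent_carryover(blank_area=13.5, loaded_area=15000.0)
print(f"carryover = {carry:.2f}%")  # 0.09%, under the 0.1% acceptance limit used here
```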

Looking back, there are a couple of conclusions we can draw from this. We're able to perform repeatable chromatography, which is a key component of method development. Our capillary scale LC systems are able to provide really considerable solvent savings compared to analytical scale instrumentation. Over the course of our single day 30 injection sequence, we fell well below the 1% relative standard deviation for retention time and well below the 2% relative standard deviation for peak area.

Over the course of the three days, we actually fell below 1% for both the peak areas and the retention times, which we thought was a really promising sign. And then for our autosampling system, we fell below that 0.1% carryover threshold, which demonstrated that we were able to properly clean and sufficiently automate that system. With that, I think we have a little bit of time for questions.

So if anyone has them, feel free to put them in the chat.

All right, I see one here. The question is: how customizable are the cartridges, column oven, and detector, and can users use third party columns or detection modules if desired? So yeah, they are fairly customizable. You're able to connect basically any commercially available column, whether it uses 1/16 inch fittings or 360 micron, 1/32 inch fittings, and we're able to fit columns anywhere from 5 to 15 cm.

In terms of using different detection modules, there certainly does need to be an interface between them. We've connected to different options. You're able to connect with the various single wavelength UV LED flow cells that we've developed, or, connecting with the DAD, the Focus Array, you're able to get full diode array detection. If you want further, different detectors, well, we work with a number of different groups, and I don't want to name them, but we are always happy to work with people and customize. And in fact, it's a rather easy integration to get custom detectors into these systems.

And so if you have a detector in mind, please reach out to us and we'd love to work with you. Next question: what are the limitations or challenges when scaling methods from analytical to capillary scale on this platform? Yeah, that's a great question. So we just published a paper in the Journal of Chromatography Open, as a tutorial, discussing methods to translate analytical scale methodology down to the capillary scale.

There's a couple of key factors to consider. First and foremost is the column. Capillary scale LC is still sort of in its infancy when it comes to widespread commercial adoption, and so not every column is offered commercially. That being said, there is a database of columns, the column selectivity database made by Dwight Stoll, in which you can put in your existing column and it will give you suitable commercially available alternatives.

It's a free service, we mention it a lot in the paper, and it's great for solving that issue. In terms of translating the flow rate and injection volumes and things like that, there are a number of online calculators, and in fact the paper discusses the equations that you need to physically scale a method.

It's normally just a scaling factor, and once you solve for that, you can reduce everything down proportionately. The next question is: how does the small solvent volume impact baseline stability and detector noise, especially in longer runs or low concentration samples? Yeah, that's a great question. So with the smaller volumes, you can run the risk of fluctuating baselines.

It tends to be more prevalent when you're using piston based pumping systems, because you typically have to introduce pulse dampeners for when the pistons kick over. Because we use syringe based pumping systems, we generate pulseless flow, and so you get much better baseline stability. In terms of sample detection at these lower concentrations, we've been able to demonstrate sufficient sensitivity across three orders of magnitude.

And so it's comparable to traditional big box LCs. The next question is: how does the retention time and peak area repeatability of the Axcend Focus LC compare to traditional analytical scale systems? I mean, it's comparable. I don't necessarily want to name the different systems we've tried it against, but we've seen very comparable data.

There have been a number of different papers published about this. This group has one, and there was another by our collaborators at Merck, and both showed comparable sensitivity and comparable repeatability. The next question is: what is the long term stability of the system beyond the three day testing period?

I haven't necessarily gone and quantified it specifically. I will say we've done a number of different studies, put the systems down, and picked them back up six months or a year later, and they run identically. We can show and overlay the data, and it really is a good match. And so while I don't necessarily have a number for it, it is still extremely repeatable beyond that three day test we carried out.

All right, with that, I think we are out of time. I want to thank everyone for coming. Please, if you have any questions, feel free to reach out to me or visit us at the Axcend website. I really look forward to hearing from you. Thank you so much and have a good one. I'd like to give a huge thank you to today's speaker, Samuel, for sharing his knowledge and expertise.