ALGORITHMS AND DIVE COMPUTERS II

Imagine you’re piloting a small boat. Your navigational equipment and skills are not really all they should be, but you get by. You’re just following the coastline or sailing around an island. If minor corrections to your course are needed from time to time, no problem. You’ll be in approximately the right place as the first landmark comes into view. You adjust your course, if necessary, and then it’s on to the next landmark. But what if there were no landmarks? What if you decide to set sail from California to Hawaii, with no improvement in your navigational equipment or skills? This time you have a real problem. Without landmarks, over that long a distance, your minor navigational deviations can build up until you are so far off course that you could bypass the entire chain of Hawaiian islands without catching so much as a glimpse of them.

What does our imaginary boating scenario have to do with algorithms and dive computers? As I mentioned in earlier posts, the algorithms used in constructing dive tables were primarily engaged in “smoothing” the Navy data. Almost any algorithm could do this in some fashion; staying relatively close to existing data, it was hard to go very far wrong. You can compare it to piloting the boat along the coastline. The use of dive computers meant that decisions – predictions, really – were being made about a wide variety of dives, some of which were far removed from the profiles in dive tables. An algorithm might work okay for dive tables but still be well out of its depth here. That’s because, rather like navigating the trip to Hawaii, you have a long series of calculations where even small inaccuracies can build up to a very wrong final result.

So, when your predictions are longer range – whenever you’re talking about a long series of calculations, whether in navigation or in dive computer algorithms – accuracy is particularly important. To construct a more accurate dive computer algorithm, you would need to see the full picture, or as much of it as possible. Ideally, you would want full and complete data sets covering all possible dive situations, particularly those where the probability of DCS is highest. But, as mentioned in previous posts in this series, studies on humans in such high-risk situations won’t, can’t, and probably shouldn’t be conducted. How, then, do you fill in the huge missing part of the picture?

What about existing data on submarine escapes? Unfortunately, this data is not only very sparse, but involves scenarios so completely different from those common to the data we use (and, to some extent, completely different from each other as well) that it doesn’t really provide much help.

Venous bubble counts have been used, notably by DCIEM, with some success, but they too have limitations. One limitation is that the correlation of bubble counts with DCS is not very strong (somewhat stronger for very high-risk dives, weaker for low-risk dives). The greater limitation is the same one that affects the Navy data sets – you still can’t subject humans to high-risk situations.

 Looks like it comes down to animal studies.   There are several problems here:  The fact that animals do not, generally speaking, scuba dive is easily handled.   And physiologists are used to dealing with the scaling involved in comparing animal studies to human studies. But how DCS manifests itself in animals is a little trickier.  For one thing, they don’t discuss their symptoms.  And, it turns out, the symptoms do vary from one species to another.   Then you have the problems of which species to use and how to actually combine animal data with human data.

                                     WHY ANIMALS DON’T SCUBA DIVE

“Regulator hoses are too short”

“They never give me enough weights”

“In water? You’re kidding, right?”


A paper by R.S. Lillo and others in the Journal of Applied Physiology used Hill equation dose-response models to successfully combine animal data with human data to look at DCS probability in saturation dives. A saturation dive is one where the diver has been at the stated depth until fully saturated – in humans, a period of about 24 hours – and then does a direct ascent. Successfully combining them means that a model using the combined data was better at predicting the results of a different series of human saturation dives (not included in either set of data used to build the model) than was the human saturation data alone.
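If you’d like to see the general shape of a Hill-type dose-response curve for yourself, here is a minimal sketch in Python. The exponent and the “half-effect” depth below are numbers I’ve made up purely for illustration – they are not the parameters Lillo and his co-authors actually fitted.

# Minimal sketch of a Hill-type dose-response curve for P(DCS) following a
# direct ascent from saturation.  The "dose" here is taken to be saturation
# depth; the exponent n and the half-effect depth d50 are invented numbers
# used only for illustration, NOT the values fitted by Lillo et al.

def hill_p_dcs(depth_fsw, d50=45.0, n=6.0):
    """Hill equation: P(DCS) rises in an S-shape with saturation depth."""
    return depth_fsw ** n / (d50 ** n + depth_fsw ** n)

for depth in (33, 40, 50):
    print(f"{depth} fsw: P(DCS) ~ {hill_p_dcs(depth):.2f}")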

On the graph below I’ve put in the DCS probability for saturation dives at 33 fsw, 40 fsw, and 50 fsw that Lillo found using the Hill equation model. On the same graph, I also show what results would be predicted for those same dives by SAUL, by a SAUL version that incorporates the effect of bubbles, by the Navy’s LE1 model, by a typical parallel (Haldane) 2-compartment model, and by the USN93 model. (The Hill equation model and the USN93 model are each shown as points with their associated 95% confidence intervals. Both SAUL models, the LE1 model, and the Haldane-type model are shown as continuous functions. The Navy’s LE1 model and their USN93 model essentially overlap each other.)

[Graph: P(DCS) for direct ascent from saturation at 33, 40, and 50 fsw – Hill equation and USN93 points with 95% confidence intervals, plus continuous curves for both SAUL models, the LE1 model, and a Haldane-type 2-compartment model.]

One thing that I hope is immediately obvious is that both SAUL models come much closer to the Hill points than any of the other models.   What may take a few moments longer to notice is that the SAUL models are also the only ones that produce the same shape (called a sigmoid curve) as the Hill points would if they were joined.  What’s much less obvious is why this particular comparison between different models matters, since these saturation dives in no way resemble anything recreational divers might consider doing.

It matters for several reasons. One is simply the general proposition that greater accuracy is good and likely to result in safer diving, even though these particular dives aren’t directly relevant. Another reason is that the particularly high “hit” rates in these dives illustrate differences in accuracy more clearly. But here’s what I consider the most important reason. Saturation dives are the very simplest of dives – at least for decompression modelling. The uptake of nitrogen is already complete. Everything that happens now relates only to off-gassing. (Unlike most other dives, where the effects of both uptake and off-gassing must be accounted for.) So all the DCS rates shown on the graph (both experimental and model-generated) are directly related to the off-gassing process. And the off-gassing process appears to produce DCS rates in the form of a sigmoid curve. With models, it is always the underlying structure of an equation that dictates what shape it will produce on a graph. The underlying structure of an equation comes from the model the equation is trying to represent. Because SAUL models use interconnected compartments, the rate equations representing them are multi-exponential, and this produces a sigmoid curve on the graph. All the other models in the graph, being Haldane-based, use parallel compartments, so their rate equations are essentially single-exponential and will produce something very close to a straight line on the graph.

The next few posts in this sequence will deal more directly with recreational diving and how SAUL relates to dive-related myths/anecdotal knowledge and to other models or decompression tables.

(Before we get to those posts, we may switch course briefly for a few “The Doctor is In” segments.)

 

Algorithms and Dive Computers

I’m still dealing with the question of: If different dive tables, algorithms, and dive computers all stem from essentially the same data, why are they so different? When I left the previous post I said this one would start with the particular problems that arise when your data comes from living beings, and it will. (I also promised this post wouldn’t be as long delayed as the previous one. I’ve kept that promise – but just barely. I’ll try to do better with the next post.)

The basic idea in doing studies is to have complete control over absolutely everything in the surrounding situation, so that the only differences in the outcomes are a direct result of the changes you make in the situation. (Good luck with that!) That level of control is difficult enough with inanimate objects (coins, dice, widgets, or whatever). Things that are alive are a large step beyond that. There is always something else going on that you can’t control – often several things – whether or not you are directly aware of them. If you’re lucky, these other things will have little or no effect on the outcomes.

When your studies involve people, things get even more complicated. Not only are people less controllable – you can’t keep them under 24-hour supervision, regulate diet, select breeding stock, etc., as you can with lab rats – there are also additional restrictions on acceptable outcomes. An obvious example: you can’t do a decompression study where 80% of the divers get the bends. Even if your own moral compass wouldn’t preclude that, an ethics committee will. While such restrictions are valid and necessary, they do tend to constrain the range of data that can be collected. I’ll get back to this point a bit later (or in the next post).

Right now, let’s get back to the Navy data discussed in the previous post.   What use can be made of all that data?   The most pragmatic use is as a simple guide to the underwater workplace, both naval and commercial.   Dives that had resulted in few or no cases of DCS were deemed safe for use; dives with unacceptably high rates of DCS were deemed unsafe.   Out of this, the first Navy dive tables were born.  All the dives were essentially square profile. (I say “essentially” because, of course, dives requiring decompression were only square profile up to a point –  that point at which the first decompression stop began.)    The tables were constructed almost directly from the data. An algorithm was used mainly as a sort of “smoothing device” to keep the tables internally consistent and to fill in any points for which there was no data.

This “smoothing” is necessary because, as discussed in the previous post, the result of a study or experiment should be looked at as a range of possibilities rather than as an absolute answer, with the actual number found being merely the best estimate, in the absence of other information. Here’s a clear example (not actual, but possible) of how other information could change your best estimate of a study’s results: Suppose that, in a dive at a particular depth for 20 minutes, 4 out of 100 divers got bent. Four percent would be your initial best estimate of DCS probability for that dive. But suppose that, in another dive at that same depth, this time for 22 minutes, only 3 out of 100 divers got bent. You would not accept 4% DCS as your best estimate for 20 minutes at that depth and 3% DCS as your best estimate for 22 minutes at the same depth. You would, in some fashion, need to adjust your best estimates so that they made sense together. The “smoothing” done by the algorithm is an adjustment of best estimates in the tables so that they all make sense together.
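To make “adjusting best estimates so they make sense together” concrete, here is a toy sketch in Python of one simple adjustment scheme: pooling adjacent estimates that disagree, so that estimated risk never decreases as bottom time increases. The real tables were smoothed by a decompression algorithm, not by this procedure; the numbers below are the fictional ones from the example above.

# Toy "smoothing" of raw DCS-rate estimates: force the estimated risk to be
# non-decreasing with bottom time by pooling adjacent estimates that disagree
# (the pool-adjacent-violators idea).  This is an illustration only, not the
# smoothing actually used to build any dive table.

def pool_adjacent_violators(rates, weights):
    # each block: [mean rate, total weight, number of original points]
    blocks = [[r, w, 1] for r, w in zip(rates, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # adjacent estimates disagree
            v1, w1, n1 = blocks[i]
            v2, w2, n2 = blocks[i + 1]
            blocks[i:i + 2] = [[(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, n1 + n2]]
            i = max(i - 1, 0)                        # re-check the previous pair
        else:
            i += 1
    out = []
    for value, _, n in blocks:
        out.extend([value] * n)
    return out

# 4/100 bent at 20 minutes, 3/100 bent at 22 minutes (the fictional example above)
smoothed = pool_adjacent_violators([0.04, 0.03], weights=[100, 100])
print(smoothed)   # both estimates pulled to 0.035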

Take a break for a brief “power nap”.

 

[Photos: napping animals – mergansers on a log, a fruit bat, and a pufferfish]

Okay, break’s over; let’s get back to work.

Even before we get to dive computers, and to dives that are more varied than those in the dive tables, we begin to see some of the reasons for different results coming from the same data.  One biggie: How much risk is an acceptable risk?  Is a 4% chance of DCS okay?  2%?  Less than 1%?   The other obvious source of difference is exactly how you adjust a large number of initial best estimates so that they make sense together.

In our fictional example above, someone might end up with new “best estimates” of 3.2% for 20 minutes and 3.4% for 22 minutes, while someone else might have it as 3.4% for 20 minutes and 3.8% for 22 minutes. (Of course, if a 2% chance of DCS was the maximum acceptable, neither 20 nor 22 minutes at that depth would be permitted. If 3.5% risk of DCS was used as the acceptable limit, one table might allow both dives while another would allow the 20 minute dive but not the 22 minute one. If no adjustment had been done to the original best estimates, a table allowing a 3.5% maximum risk would allow the dive for 22 minutes, but not the one for 20!)

These two choices – maximum acceptable chance of DCS and method of adjusting or “smoothing” the results – are the primary reasons for the real life differences you can see between the U.S. Navy Tables, the DCIEM tables, and the various other tables that exist.

For most commercial diving applications, dive tables, with their essentially “square profile” approach, were reasonably appropriate.  Divers would descend to the work site and remain at that depth until time to ascend.   As recreational diving increased in popularity, tables that assumed that a diver remained at a single bottom depth until ascending became awkward to use.   That was frequently not how recreational divers wanted to dive.

When I took my initial certification course, before diving computers were in use, we were told to use the tables in accordance with the deepest depth we descended to – even if we only stayed at that depth for a moment or two. While it seems way too conservative, without a known safe alternative, that was (and remains) the best practice. At least it was simple. The method for calculating times for repetitive dives from the tables (which you probably learned, but may have since banished from memory) involved turning over the tables after the first dive, categorizing yourself as A, B, C, etc., based on the nature of that first dive and the time elapsed since your ascent (changing, of course, as more time elapsed), then turning back to the tables and using your category to adjust them to get the allowable parameters for your next dive. After a while, PADI developed a dive wheel that simplified the repetitive dive calculation somewhat. Still, dive computers, when they finally arrived, were very welcome indeed.

The most significant change that came with dive computers – besides simplifying what the diver had to do – was the ability to continuously incorporate information from the depth gauge and the computer’s clock into deco calculations.  That made it possible to calculate multilevel dives differently from the “deepest depth” method.
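Here is a rough sketch of what “continuously incorporating information from the depth gauge and the clock” can look like inside a computer. The Haldane-style update, the half-times, and the sample profile below are all invented for illustration – this is not the algorithm used by any particular dive computer (and certainly not SAUL’s).

import math

# Toy sketch of how a dive computer can fold each new depth/time sample into
# a running tissue-loading estimate.  Haldane-style exponential uptake is used
# only because it is the simplest thing to write down; the half-times, the
# sample profile and the step size are invented for the illustration.

HALF_TIMES_MIN = [5.0, 20.0, 80.0]          # hypothetical compartments
F_N2 = 0.79                                 # breathing air

def ambient_n2(depth_fsw):
    """Inspired nitrogen pressure (atm) at a given depth in feet of sea water."""
    return F_N2 * (1.0 + depth_fsw / 33.0)

def update(tissue_p, depth_fsw, dt_min):
    """Move every compartment one time step toward the current ambient pressure."""
    p_amb = ambient_n2(depth_fsw)
    return [p_amb + (p - p_amb) * math.exp(-math.log(2.0) / t_half * dt_min)
            for p, t_half in zip(tissue_p, HALF_TIMES_MIN)]

# surface-equilibrated diver, then a stream of depth samples from the gauge
tissues = [ambient_n2(0.0)] * len(HALF_TIMES_MIN)
for depth_fsw in [0, 30, 60, 60, 60, 45, 30, 15, 0]:       # a multilevel profile
    tissues = update(tissues, depth_fsw, dt_min=5.0)        # 5-minute steps
print([round(p, 3) for p in tissues])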

But now the algorithms had a more complex job to do.  The nature of that complexity and how it’s dealt with will be covered in the next blog.

 

NDL and Decompression Tables, Algorithms, and Dive Computers

I realize it’s been far too long since my last blog post. I have been working very hard, getting SAUL ready for inclusion in a dive computer and attending to other scientific work. I’ve also managed to get in a little relaxation and diving – Aloha from Hawaii, everyone. I’ll try to keep the posts coming with a little more regularity now.

I promised some posts about SAUL’s validation, and these will be happening. But before they do, we need a prequel of sorts – a little background information that every diver should know about decompression tables, NDLs, algorithms, and dive computers. What makes it hard is that we divers are such a diverse lot. Some of you will already know much of what’s in today’s post, and in greater detail. To a very few of you, the information may seem entirely new. Most of you will probably fall somewhere in between those two extremes. So, let’s talk about…

NDL and Decompression Tables, Algorithms, and Dive Computers

You and your buddy are doing almost exactly the same dive.  His dive computer is telling him to surface; yours is still allowing the dive to continue.

Or you’re a commercial diver working different sites.  Some employers are using the U.S. Navy tables, others the DCIEM tables.  The tables differ in how long you’re supposed to work at a particular depth and, for decompression diving, the time spent at each decompression stop.

In recreational diving, some of you may deliberately use one dive algorithm (or dive computer) for certain types of dives and a different algorithm (or dive computer) for other types of dives, knowing they will differ in what they allow.  What’s going on here?

All dive tables, algorithms, and dive computers are based on data from actual dives. Until very recently, the largest data bank for this purpose, which will be referred to below simply as Navy data, had been produced by the U.S. Navy in collaboration with DCIEM and the Royal Navy. (While the total amount of data collected in recent years under DAN’s PDE program greatly surpasses it, the Navy still holds the most systematically varied and organized databank.) So, most dive tables, algorithms, and dive computers spring from the Navy data. This leads to the obvious question: If they come from essentially the same data, why are they different?

We could try to finesse the question by comparing it to different artists painting the same harbor scene and producing very different paintings, but that’s not really a satisfactory comparison.  Paintings are art; dive tables and algorithms are supposed to be science.

Without getting into any esoteric questions like the exact natures of science, truth or meaning, we just want to know why the same huge data bank leads different scientists to different conclusions.   Some of the answer lies in how scientists think, in the theories they use to interpret data.    But a large part of the answer lies in the nature of all data, in the nature of data from living things, and in the particular complications involved in collecting data from people.

We’ll talk about data first.   Data in general is a collection of measurements made under specified conditions.  In the case of diving data, the measurements collected would essentially be the determination of  “bent” or “not bent”  or “niggles” (which refers to having mild symptoms of the bends, but the symptoms disappear on their own, without recompression therapy), while the specified conditions would be all the details of the dive.  For each set of specified conditions – e.g., 80 feet for 30 minutes bottom time, breathing air, coupled with a descent rate of 75 feet/min and an ascent rate for the direct ascent to the surface of 30 feet/min – you would see the total number of divers and the number who got  “bent” or “niggles”.  For each set of specified conditions, you could then work out the probability of getting “bent”.   (“Niggles”, usually considered a partial case of the bends, would be assigned an appropriate part-value:  less than the “1” reserved for being “bent” but greater than the “0” for “not bent”.   Typically, a niggle would count as one tenth of a hit.)
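A tiny worked example of that scoring, sketched in Python (the dive counts are invented; the one-tenth weighting for a niggle is the convention just mentioned):

# One made-up set of specified conditions, scored the way described above:
# "bent" counts as 1, a "niggle" as 0.1, "not bent" as 0.
divers  = 50
bent    = 1
niggles = 2
p_dcs = (bent * 1.0 + niggles * 0.1) / divers
print(f"estimated P(DCS) = {p_dcs:.3f}")    # (1 + 0.2) / 50 = 0.024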

[Photo: a turtle]

Okay, it’s a little slow going – but we’re getting there.

So, you’ve got your data.  What does it tell you?  In the example above you would now know what percentage of the sample of divers measured under that particular set of  specified conditions got “bent”.   But, of course, you didn’t measure an infinite number of divers.  If you did the exact same study again, you might get a somewhat different percentage.  There are statistical methods that are used to determine how close the result you got is likely to be to the hypothetical “true result” – the result you would find if it were possible to measure an infinite number of divers.  This is how the possible difference between the results of a study and a “true result” might be shown on a graph.

[Graph: results from DAN’s Project Dive Exploration – estimated P(DCS) for air and nitrox dives, each shown as a point with its 95% confidence interval.]

The dot or circle indicates the actual result of the study – in this case, some of the results from DAN’s Project Dive Exploration – while the lines sticking out from the top and bottom of each dot or circle show the range within which the “true result” is probably located. So, while the circles are, in each case, your best guess at the “true result”, you’re 95% certain that the “true result” lies somewhere in between the top and bottom of the two lines sticking out from each circle. The results for AIR are based on approximately 103,000 single dives while the results for NITROX are based on approximately 7,000 single dives. In general, the larger the sample size, the closer it is likely to be to the “true result” (and the shorter the lines sticking out from the circle are likely to be).
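For anyone curious how those vertical lines get calculated, here is a minimal sketch of one standard way of putting a 95% confidence interval around an observed incidence (the Wilson score interval). The hit counts in the example are invented, not DAN’s actual numbers; only the approximate sample sizes come from the paragraph above.

import math

def wilson_interval(hits, n, z=1.96):
    """95% confidence interval for an observed proportion (Wilson score interval)."""
    p = hits / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# invented hit counts, just to show how sample size changes the interval width
for hits, n in [(30, 103_000), (2, 7_000)]:
    lo, hi = wilson_interval(hits, n)
    print(f"{hits}/{n}: {hits/n:.5f}  95% CI ({lo:.5f}, {hi:.5f})")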

There are other ways of talking about this same issue of a sample measurement versus a “true result”.    When public opinion polls show their results, they will usually contain a statement something like: “These results are accurate to within 3 percentage points, nine times out of ten.”

The fact that the results of a study are only an estimate of some “true result” is an issue that applies to all kinds of data.

[Photo: a turtle]

Take heart! We’re getting closer now.

Notice that, in our earlier example, we were talking about only one set of conditions where depth, bottom time, the rate of descent, the rate of ascent, and the breathing mixture were each specified.  So our data refers directly only to dives under that exact same set of conditions – almost as if it were a single question on a public opinion poll.  If you were to change any one or more of those conditions – 60 feet instead of 80, 40 minutes instead of 30, etc. – or if you were to add an additional condition – say, making the dive multi-level, or adding a safety stop – each and every change would be like adding a completely new question to the opinion poll.   (Making for a completely unmanageable opinion poll, of course.)

While the Navy data contains a large number of sets of different specified conditions, it would have to be infinitely large to cover all possible sets of specified conditions.

There are two points that should be sandwiched in here before we go on (in the next posting) to deal more specifically with data from living things, particularly people.

The first is a comment on the nature of diving data.  As you can see from the above, diving data is inherently probabilistic to begin with.  That is, you have the number of divers in a particular set of conditions, and you have the number who got “bent”, which is easily expressed as a percentage of the number of divers, and which amounts to an estimate of the probability of getting bent under those particular conditions.   Why do I bother emphasizing this rather obvious conclusion?  (And no, it’s not primarily because SAUL is a probability-based model.)

It seems that a substantial number of divers may not be aware of, or may not totally accept, the fact that if you dive there is always some probability, however minute, that you will get bent. Anecdotally, I have heard from a number of diving doctors about patients who resist a diagnosis of decompression illness, protesting that they can’t possibly be bent, as they’ve never exceeded the tables or their dive computers. But the truth is, unless you avoid diving altogether (or, possibly, if, while diving, you never exceed a depth in the general neighborhood of 15 feet – a feat which may be even more difficult than avoiding diving altogether), there is always some non-zero probability of getting bent. True, for most recreational diving, that probability is very small. On the other hand, the probability of winning a major lottery prize is even smaller. Yet, for each such prize, there is at least one winner.

The other point we need to mention here is this: Even though data from each separate set of diving conditions must be looked at as a completely separate question, we know, both intuitively and logically, that there must inevitably be some relationship or connection between these disparate “questions”. And that, of course, is where algorithms, decompression tables, etc. come in. But before we get to those, we need to deal with the particular problems that arise when your data comes from living beings, particularly people.

So my next post will start with that issue and go on from there.  And, while I can’t say exactly when that next post will appear, I promise it won’t be as long delayed as this one was.

 

What’s Happening?

Those of you who have read “Coming Soon to a Dive Computer Near You” may have noticed that “Soon” has been a while in coming.  But we are now a lot closer to getting SAUL into a dive computer.  Liquivision is collaborating with us to produce a dedicated SAUL dive computer and I am hard at work adapting programs for that purpose.  We don’t have a projected release date yet so, obviously, you won’t find it under your Christmas tree this year.  As for next year, who knows?

I came across a video (on Vimeo.com) that was taken of my presentation to the International Congress of Hyperbaric Medicine last year in South Africa and posted a link to that on my Articles page.  You may need to turn up the sound to hear it properly.  I think I need to work on speaking a bit louder (or closer to the microphone).

The Articles page now has the “To Stop Or Not to Stop… And Why” article from Diver magazine. I will also be posting my original version of that article, with a minor update, because: a) I generally preferred my wording (rightly or wrongly) where a few editorial changes were made, b) I wanted to post the second cartoon that was submitted with the article (but not published), and c) this updated version was accepted for publication by the European edition of Alert Diver.

Now that SAUL is getting a little closer to coming out in a computer, it’s probably time to pull together the various ways in which SAUL’s validity can be demonstrated. I’ll try to do this in a series of relatively short pieces – not necessarily consecutive – on the blog. Once a few have been done, I’ll put them, and subsequent ones, together for easy reference under a new heading.

 

The Doctor Is In III

Q:  This question was posted by Craig in comments to The Doctor is In part II

An interesting discussion has come up on ScubaBoard. What are the major differences between your model and the 4 compartment serial model used by DCIEM?

A:

The answer to this question may be of interest to readers in general, but not always to the same degree. So, what I’ve done is print a reasonably complete answer, with each of its paragraphs preceded by a quick-and-easy overview.

There are three main differences between the DCIEM interconnected model and SAUL.  (1) the way the compartments are connected, (2) the calculation of how the inert gases move between compartments, and (3) which compartment(s) are risk-bearing.

Here is the more detailed description of the main differences between the DCIEM interconnected model and mine. There are three major differences: (1) the geometric arrangement of the tissues and circulating blood, (2) the “order” of the diffusion kinetics that is presumed, and (3) whether one or more than one compartment carries explicit risk. Let’s consider these one at a time.

 

(1) The DCIEM model has 4 compartments connected in series – i.e., almost like a train or subway – you (or the gas) can move from the first to the second to the third, and so on, but not directly from the first to the third.  In SAUL, the arrangement of compartments is more like a wheel, with the risk bearing compartment as the hub and gas moving between the hub and the spokes in either direction.  This type of model was first described 70 years ago by Morales and Smith who concluded that it was better than other arrangements of compartments in illustrating how gases or other substances moved in tissues.

 

(1) The compartmental geometry I chose to use – wherein the compartments are interconnected in parallel – originated almost 70 years ago with the work of Morales and Smith, two US Navy modellers. The work is cited in refs 25-27, 31, 32 in my J APPL PHYSIOL paper, with ref 26 being the most relevant. These modellers examined a number of different compartmental arrangements, and concluded that, on balance, their so-called “competitive parallel arrangement” (not to be confused with parallel models, like Haldane, which are not competitive) best captured tissue perfusion. Their competitive parallel arrangement is the geometry illustrated in Fig 1B of my J APPL PHYSIOL paper. They did not consider DCS-active/DCS-inactive issues. They used the term “competitive” to reflect the fact that all the tissues “compete” for the circulating dissolved gases. The DCIEM model involves 4 compartments that are connected in series, which is clearly a different arrangement from the one Morales and Smith, and I, used.
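For readers who like to see things in code, here is a toy numerical sketch of the “competitive parallel” (hub-and-spoke) arrangement: a central compartment exchanges dissolved gas with the circulation and with each peripheral compartment, and the gas can flow in either direction depending on which side holds more. The rate constants and starting loads are invented for the illustration; they are not SAUL’s fitted parameters.

# Toy "hub and spoke" (competitive parallel / interconnected) arrangement:
# a central compartment exchanges dissolved inert gas with the circulation
# and with two peripheral compartments.  Rate constants and gas loads are
# invented for the illustration; they are not SAUL's fitted parameters.

def step(central, peripherals, p_arterial, dt,
         k_blood=0.20, k_exchange=(0.05, 0.01)):
    """Advance the gas loads by one Euler step (first-order kinetics)."""
    # exchange between the circulating blood and the central (hub) compartment
    d_central = k_blood * (p_arterial - central)
    new_peripherals = []
    for p, k in zip(peripherals, k_exchange):
        flow = k * (central - p)   # hub -> spoke if positive, spoke -> hub if negative
        new_peripherals.append(p + flow * dt)
        d_central -= flow
    return central + d_central * dt, new_peripherals

# start fully saturated at depth, then drop the arterial tension to surface level
central, peripherals = 2.0, [2.0, 2.0]
for _ in range(60):                    # 60 one-minute steps of off-gassing
    central, peripherals = step(central, peripherals, p_arterial=0.79, dt=1.0)
print(round(central, 3), [round(p, 3) for p in peripherals])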

 

(2)  SAUL uses the same method of calculating the rate at which dissolved gas can flow between compartments as is used by essentially all decompression work I know of, other than the DCIEM model.  The DCIEM model adds in an additional term which I would only expect to see in circumstances where the dissolved concentration of gas was at least 10 times greater than would ever occur in diving.   I have never seen a satisfactory explanation of their use of this term.

(2) I used 1st order kinetics to describe the rate of dissolved inert gas flow between compartments, while the DCIEM model included a quadratic contribution to the kinetics. All decompression work that I am aware of (Haldane, US Navy probabilistic models, etc.) other than the DCIEM model presumes 1st order kinetics for dissolved inert gas exchange. From the perspective of basic physical chemistry, 1st order kinetics – whereby the rate of dissolved gas diffusion out of a compartment is proportional to the 1st power of the dissolved inert gas concentration in that compartment (e.g. see Eq. A2 in my Appendix A) – is essentially always sufficient to capture the underlying kinetics, except for concentrations that are extremely large. I wouldn’t expect a quadratic contribution to kick in until the dissolved concentration became at least an order of magnitude greater than is encountered in decompression problems. I never understood the DCIEM inclusion of a quadratic contribution to the kinetics, and I have never seen it satisfactorily explained.
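For readers who want a concrete feel for the difference between 1st order kinetics and a quadratic contribution, here is a small numerical comparison. The rate constants are invented; the point is only that, with a small quadratic coefficient, the quadratic term matters little at ordinary concentrations and only becomes appreciable at concentrations roughly an order of magnitude larger.

# First-order kinetics: out-flow rate proportional to the dissolved gas
# concentration c.  A quadratic contribution adds a term proportional to c**2.
# Both constants are invented, purely to compare the size of the two terms.

K1 = 0.1      # first-order rate constant (per minute), hypothetical
K2 = 0.001    # quadratic coefficient, hypothetical

for c in (1.0, 3.0, 30.0):     # dissolved gas concentration, arbitrary units
    first_order = K1 * c
    quadratic = K2 * c * c
    share = 100 * quadratic / (first_order + quadratic)
    print(f"c = {c:5.1f}: first-order {first_order:6.3f}, "
          f"quadratic {quadratic:6.3f} ({share:.0f}% of total)")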

 

(3)  In SAUL the risk is carried entirely by the “hub” or central compartment.  The other compartments do not carry risk but they affect the risk in the “hub” by receiving dissolved gas from the “hub” (which lessens the risk there) or sending dissolved gas to the “hub” (which increases the risk there).  In  the main DCIEM model all the compartments carry risk.  I have found that, by keeping the risk only in one compartment SAUL is better able to predict decompression stress than other models.  In comparing different models’ ability to predict DCS on direct ascents from saturation, SAUL was the only one that correctly reproduced the shape that best fits the data.  

(3) In my models the risk is carried entirely by the relatively well-perfused central compartment, while the peripheral compartments are not themselves explicitly risk-bearing. They influence risk indirectly by acting as sources or sinks (depending on conditions) of dissolved inert gas, relative to the central risk-bearing compartment. In the main DCIEM model, i.e. the one developed later into a probabilistic model [as described in P. Tikuisis, R.Y. Nishi, and P.K. Weathersby, Undersea Biomedical Research, 15(4), 301-313 (1988)], all the compartments bear risk. I have found that if more than one compartment in a parallel interconnected model is allowed to be explicitly risk-bearing, the predictive capability of the model deteriorates. Specifically, have a look at Fig. 2 in my J APPL PHYSIOL paper. This illustrates the P(DCS) predictions of a number of different models for the simplest profile that exists – i.e. a direct ascent from saturation. While the parallel interconnected models shown both have a sigmoidal shape that fits the data, the other models uniformly fail to properly predict the observed results. They do not swing up fast enough with increasing saturation depth. Model failure here is serious because, as previously indicated, this is the simplest possible profile that can exist. I have found that if an explicit risk is put into more than one compartment, the interconnected models deteriorate, i.e. they lose their sigmoidal shape, reduce to a quasi-linear form, and fail to reproduce the observed data.

 

 

THE DOCTOR IS IN Part II

Q:  (The question(s) in this case were first submitted by Craig and published as comments following part I of The Doctor Is In.)

Hi Saul,

I find your model innovative, intriguing, and a refreshing departure from traditional thinking.

What are the no decompression times for air and 32% nitrox so that I can compare it to existing algorithms? I dive the Pelagic Pressure Systems DSAT algorithm on an Oceanic dive computer.

What does your model say about ascent rates?

What does your model say about the duration of the traditional safety stop at 15 fsw? Seems like it may vary dependent on previous exposure.

What does your model say about stops deeper than the safety stop for no decompression diving, i.e. deep stops?

What does your model say about handling short decompression dives as may be encountered by the recreational diver, i.e. X minutes at 15 feet? Is an additional “safety stop” advantageous?

Good diving,

Craig

A:  No-decompression times, or NDLs, in the old sense don’t exist anymore, because their values will depend on the diver’s inputted maximum tolerated risk. For that reason, I don’t have tables I can give you for comparison.

There are comparisons between different models in my 2007 J. Appl. Physiol. paper on this website, particularly Table 4.  (The model described as 3CM is S.A.U.L.)

There are also some general comparisons in an article written with my wife “Coming Soon to a Dive Computer Near You” that was published in the European (and several other) edition(s) of Alert Diver, and is also available on this website.

The comparisons in Table 4 of the J. Appl. Physiol. paper were done with all models calibrated against very high risk data.  So for a square profile dive on air to 100 fsw for a BT of 25 min. and a direct ascent – i.e., no stop – 3CM (S.A.U.L.) would predict a “hit” rate of 7%.  Recalibrating the model using recreational profiles in the dataset takes this down to 3.4%.  Adding a safety stop at 15 fsw for 3 min drops the expected hit rate to 0.9%.  (For anyone who is a little unclear on the meaning or purpose of calibration, there’s a pretty good explanation in the “Coming Soon..” article.)
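For anyone who’d like a concrete (if highly simplified) picture of what calibration involves, here is a toy sketch: a one-parameter risk model fitted to invented dive-outcome data by maximum likelihood. The model form, the parameter, and the data are all made up for the illustration – this is not SAUL’s calibration procedure; as noted above, the “Coming Soon..” article has a fuller explanation of what calibration means.

import math

# Toy picture of "calibration": choose the value of a single risk-scaling
# parameter that makes some invented dive-outcome data most probable under a
# made-up one-parameter model.

# (integrated "dose" for the profile, divers exposed, divers bent) -- invented
data = [(1.0, 200, 2), (2.0, 150, 5), (3.0, 100, 7)]

def p_dcs(dose, a):
    """One-parameter risk model: P = 1 - exp(-a * dose)."""
    return 1.0 - math.exp(-a * dose)

def log_likelihood(a):
    total = 0.0
    for dose, n, hits in data:
        p = p_dcs(dose, a)
        total += hits * math.log(p) + (n - hits) * math.log(1.0 - p)
    return total

# crude grid search for the maximum-likelihood value of the parameter
best_a = max((a / 1000.0 for a in range(1, 200)), key=log_likelihood)
print(f"calibrated a = {best_a:.3f}")
print([round(p_dcs(d, best_a), 3) for d, _, _ in data])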

For obvious reasons, all algorithms consider a dive on nitrox safer than the same dive on air, resulting in (for other models) longer no-decompression times or (for S.A.U.L.) lower predicted risks. However, the difference between air and nitrox is more pronounced for S.A.U.L. than for other models, making nitrox, according to S.A.U.L., safer than is currently believed.

For ascents, slower is better within the range of 100 fsw/min to 30 fsw/min. Ascent rates slower than 30 fsw/min are very difficult for a free-swimming diver and, beyond a point, an extremely slow ascent would begin to resemble a deep stop, which is not effective (see below).

The traditional 15 fsw stop depth for 3 min stop time is pretty much optimal for dives in the recreational no-decompression (low-risk) regime. But the differences between doing the stop at 15 fsw vs. 10 fsw or 20 fsw (for 3 min at whichever depth) are small. Also, as you infer, the optimal stop depth (given that you are doing a single stop) gets deeper as the maximum depth of the dive gets deeper.

Deep stops are not useful for reducing the probability of DCS, and can actually increase the risk of DCS if not accompanied by extra stop time at a shallow depth.  This is consistent with three navy studies on the effect of deep stops on DCS rates for decompression air dives (one U.S., one French, one Swedish).  A stop in the 10 – 20 fsw range is more efficient at safely off-gassing before surfacing.

With respect to short decompression dives, the model is totally consistent with the U.S. Navy protocols for decompression diving (see, e.g., the U.S. Navy decompression tables for air, available on the internet, if you don’t already have them). That is, the shortest stop time is spent at the deepest required stop depth, more time at the second-to-deepest stop…, and the most time at the shallowest stop depth.

The Doctor Is In

This is an ongoing feature in which Dr. Saul Goldman answers  questions about S.A.U.L.,  its characteristics, and how it came about.

 

Q:  What’s the point of your new decompression model?  Aren’t there enough models out there already?

 

A:  I’ll deal with the second part of your question first.  In a certain sense, there’s really only one model “out there” right now.   Virtually all current decompression models – other than mine – conform to the Haldane model in their structure.  Each of  them involves some variation, whether minor or major, grafted onto the original Haldane structure.  That doesn’t quite answer the question, of course.  Whether you call it one model or several models, number isn’t really the issue.   Rather, the issue is the quality and capability of the model or models.  There are “enough” models out there (whether it’s one or a dozen) when they are capable of dealing with the issue of decompression in a satisfactory manner.   In my opinion, this is not the case, so I would have to say that, no, there are not enough models out there already.

This, then, becomes the answer to the first part of your question.  The point of my new decompression model is that present models are insufficient and my model is much more accurate.  I was unhappy with existing models, both as a diver and as a scientist, and this led me to develop a better one.

 

Q:  You say yours is a better model.  But every model out there was developed by someone who thought – and likely still thinks – that theirs is the best.   Why should we take your word over theirs?

 

A:  It’s not really a matter of taking my word over theirs, or vice versa. What it comes down to are the facts and the evidence in support. You need enough information to reach your own conclusions on which model is best. That’s why I have posted here the full text of a few of my published scientific journal articles: one directly on my model, and three on a related topic – the fundamental characteristics and behaviour of gas bubbles in fluids and tissue-like substances. If you’re not in the mood for serious study, I’ve also posted two published magazine articles, written together with my wife, that take a lighter approach to my model and some of its characteristics. Frankly, I’m confident that anyone who takes the time to look at the facts carefully will come to the conclusion that mine is the best model.

 

Q:   Three of the posted articles are on bubbles.  Does that mean that your model is a bubble-based model?

 

A:   No.  Surprisingly, despite my expertise on bubbles – or, more likely, because of it – my model is not bubble-based.    I do have a version of my model that incorporates bubbles, but the difference between the two versions is relatively minor.

Upcoming Conference

This month I will be attending, as an invited speaker, the South Pacific Underwater Medicine Society’s (SPUMS) 41st Annual Scientific Meeting, held from May 20-26, 2012, in Madang, Papua New Guinea.

At this conference, I will be giving two talks. The first is on Wednesday, May 23, titled “An Improved New Class of Interconnected Decompression Models”, which will explain the S.A.U.L. model. The second talk is on Friday, May 25, and is titled “Some Issues in Decompression: Deep Stops and Types I/II vs. Type III DCS”.