Algorithms and Dive Computers

I’m still dealing with the question of: if different dive tables, algorithms, and dive computers all stem from essentially the same data, why are they so different?  When I left the previous post, I said this one would start with the particular problems that arise when your data comes from living beings, and it will.  (I also promised this post wouldn’t be as long delayed as the previous one.  I’ve kept that promise – but just barely.  I’ll try to do better with the next post.)

The basic idea in doing studies is to have complete control over absolutely everything in the surrounding situation, so that any differences in the outcomes are a direct result of the changes you make to that situation.  (Good luck with that!)  That level of control is difficult enough with inanimate objects (coins, dice, widgets, or whatever).  Things that are alive are a large step beyond that.  There is always something else going on that you can’t control – often several things – whether or not you are directly aware of them.  If you’re lucky, these other things will have little or no effect on the outcomes.

When your studies involve people, things get even more complicated.  Not only are people less controllable – you can’t keep them under 24-hour supervision, regulate their diet, select breeding stock, etc., as you can with lab rats – but there are also additional restrictions on acceptable outcomes.  An obvious example: you can’t do a decompression study where 80% of the divers get the bends.  Even if your own moral compass wouldn’t preclude that, an ethics committee will.  While such restrictions are valid and necessary, they do tend to constrain the range of data that can be collected.  I’ll get back to this point a bit later (or in the next post).

Right now, let’s get back to the Navy data discussed in the previous post.   What use can be made of all that data?   The most pragmatic use is as a simple guide to the underwater workplace, both naval and commercial.   Dives that had resulted in few or no cases of DCS were deemed safe for use; dives with unacceptably high rates of DCS were deemed unsafe.   Out of this, the first Navy dive tables were born.  All the dives were essentially square profile. (I say “essentially” because, of course, dives requiring decompression were only square profile up to a point –  that point at which the first decompression stop began.)    The tables were constructed almost directly from the data. An algorithm was used mainly as a sort of “smoothing device” to keep the tables internally consistent and to fill in any points for which there was no data.

This “smoothing” is necessary because, as discussed in the previous post, the result of a study or experiment should be looked at as a range of possibilities rather than as an absolute answer, with the actual number found being merely the best estimate in the absence of other information.  Here’s a clear example (not actual, but possible) of how other information could change your best estimate of a study’s results: Suppose that, in a dive at a particular depth for 20 minutes, 4 out of 100 divers got bent.  Four percent would be your initial best estimate of DCS probability for that dive.  But suppose that, in another dive at that same depth, this time for 22 minutes, only 3 out of 100 divers got bent.  You would not accept 4% DCS as your best estimate for 20 minutes at that depth and 3% DCS as your best estimate for 22 minutes at the same depth.  You would, in some fashion, need to adjust your best estimates so that they made sense together.  The “smoothing” done by the algorithm is an adjustment of best estimates in the tables so that they all make sense together.
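(The post doesn’t say which adjustment method any particular table used, but one standard statistical way to make estimates like these “make sense together” – that is, to keep estimated risk from decreasing as bottom time increases – is the pool-adjacent-violators algorithm.  Here’s a minimal sketch in Python, using only the hypothetical 20- and 22-minute numbers from the example above; the function and the data are illustrative, not anyone’s actual table-building method.)

```python
def pool_adjacent_violators(events, trials):
    """Monotone (non-decreasing) risk estimates, one per dive schedule,
    for schedules listed in order of increasing bottom time."""
    # Each block pools one or more adjacent schedules: [events, trials, schedules covered]
    blocks = [[e, n, 1] for e, n in zip(events, trials)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            # Violation: estimated risk went *down* with a longer bottom time.
            # Pool the two blocks, then re-check against the block before them.
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            blocks[i][2] += blocks[i + 1][2]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    # Expand the pooled blocks back out to one estimate per schedule.
    smoothed = []
    for e, n, covered in blocks:
        smoothed.extend([e / n] * covered)
    return smoothed

# The hypothetical example: 4/100 bent at 20 minutes, 3/100 bent at 22 minutes.
times = [20, 22]
raw = [0.04, 0.03]
adjusted = pool_adjacent_violators([4, 3], [100, 100])
for t, r, a in zip(times, raw, adjusted):
    print(f"{t} min: raw estimate {r:.1%} -> adjusted estimate {a:.1%}")
# 20 min: raw estimate 4.0% -> adjusted estimate 3.5%
# 22 min: raw estimate 3.0% -> adjusted estimate 3.5%
```

In this particular method the two inconsistent results get pooled into one shared estimate (7 bent out of 200 dives, or 3.5%); other adjustment methods would land on different numbers, which is exactly the point developed below.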

Take a break for a brief “power nap”.

 

[Photos: mergansers on a log, a fruit bat, and a pufferfish]

Okay, break’s over; let’s get back to work.

Even before we get to dive computers, and to dives that are more varied than those in the dive tables, we begin to see some of the reasons for different results coming from the same data.  One biggie: How much risk is an acceptable risk?  Is a 4% chance of DCS okay?  2%?  Less than 1%?   The other obvious source of difference is exactly how you adjust a large number of initial best estimates so that they make sense together.

In our fictional example above, someone might end up with new “best estimates” of 3.2% for 20 minutes and 3.4% for 22 minutes, while someone else might have it as 3.4% for 20 minutes and 3.8% for 22 minutes.  (Of course, if a 2% chance of DCS was the maximum acceptable, neither 20 nor 22 minutes at that depth would be permitted.  If 3.5% risk of DCS was used as the acceptable limit, one table might allow both dives while another would allow the 20-minute dive but not the 22-minute one.  If no adjustment had been done to the original best estimates, a table allowing a 3.5% maximum risk would allow the dive for 22 minutes, but not the one for 20!)
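To make that concrete, here is a tiny sketch (again using only the made-up numbers above) of how the same raw data, different adjustment choices, and one choice of maximum acceptable risk lead to different “tables”:

```python
# Fictional numbers from the example above -- three sets of "best estimates"
# for the same two dives, and a 3.5% maximum acceptable chance of DCS.
estimate_sets = {
    "unadjusted": {20: 0.040, 22: 0.030},
    "adjuster A": {20: 0.032, 22: 0.034},
    "adjuster B": {20: 0.034, 22: 0.038},
}
max_acceptable = 0.035

for name, estimates in estimate_sets.items():
    allowed = [t for t, risk in estimates.items() if risk <= max_acceptable]
    print(f"{name}: allowed bottom times -> {allowed} min")
# unadjusted: allowed bottom times -> [22] min   (22 minutes allowed, 20 not!)
# adjuster A: allowed bottom times -> [20, 22] min
# adjuster B: allowed bottom times -> [20] min
```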

These two choices – maximum acceptable chance of DCS and method of adjusting or “smoothing” the results – are the primary reasons for the real life differences you can see between the U.S. Navy Tables, the DCIEM tables, and the various other tables that exist.

For most commercial diving applications, dive tables, with their essentially “square profile” approach, were reasonably appropriate.  Divers would descend to the work site and remain at that depth until time to ascend.   As recreational diving increased in popularity, tables that assumed that a diver remained at a single bottom depth until ascending became awkward to use.   That was frequently not how recreational divers wanted to dive.

When I took my initial certification course, before diving computers were in use, we were told to use the tables in accordance with the deepest depth we descended to – even if we only stayed at that depth for a moment or two.  While it seems way too conservative, without a known safe alternative, that was (and remains) the best practice.  At least it was simple.  The method for calculating times for repetitive dives from the tables (which you probably learned, but may have since banished from memory) involved turning over the tables after the first dive, categorizing yourself as A, B, C, etc., based on the nature of that first dive and the time elapsed since your ascent (changing, of course, as more time elapsed), and then turning back to the tables, using your category to adjust the tables to get the allowable parameters for your next dive.  After a while, PADI developed a dive wheel that simplified the repetitive dive calculation somewhat.  Still, dive computers, when they finally arrived, were very welcome indeed.
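For anyone who has happily banished that procedure from memory, here is a structural sketch of the repetitive-dive bookkeeping.  Every table value in it is a made-up placeholder – the point is the sequence of lookups, not the numbers – so it is emphatically not something to plan a dive with.

```python
# Hypothetical lookup tables -- placeholder values only, NOT real dive-table data.
# First dive (depth in feet, bottom time in minutes) -> pressure group letter
GROUP_AFTER_DIVE = {(60, 40): "G", (60, 30): "E"}
# (group, whole hours spent on the surface) -> group after the surface interval
GROUP_AFTER_INTERVAL = {("G", 1): "E", ("G", 2): "C"}
# (group, next depth in feet) -> residual nitrogen time in minutes
RESIDUAL_NITROGEN_MIN = {("E", 50): 26, ("C", 50): 17}

def allowed_repetitive_time(first_dive, surface_hours, next_depth_ft, no_deco_limit_min):
    """Allowable bottom time for the next dive, following the table procedure:
    first dive -> group letter -> adjusted group after the surface interval -> penalty."""
    group = GROUP_AFTER_DIVE[first_dive]
    group = GROUP_AFTER_INTERVAL[(group, surface_hours)]
    penalty = RESIDUAL_NITROGEN_MIN[(group, next_depth_ft)]
    return no_deco_limit_min - penalty

# Example: 60 ft for 40 minutes, two hours on the surface, next dive to 50 ft
# with a (placeholder) 80-minute no-decompression limit:
print(allowed_repetitive_time((60, 40), 2, 50, 80))   # -> 63 minutes
```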

The most significant change that came with dive computers – besides simplifying what the diver had to do – was the ability to continuously incorporate information from the depth gauge and the computer’s clock into deco calculations.  That made it possible to calculate multilevel dives differently from the “deepest depth” method.
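The next post will get into the details, but as a rough preview of what “continuously incorporating” depth and time can look like, here is a minimal sketch of a Haldane-style update for a single theoretical tissue compartment, run once per depth sample.  The half-time, sample rate, and dive profile are illustrative assumptions, not any manufacturer’s actual values or algorithm.

```python
import math

SURFACE_PRESSURE_BAR = 1.0
N2_FRACTION = 0.79                 # breathing air
HALF_TIME_MIN = 20.0               # one example compartment half-time (assumed)
SAMPLE_INTERVAL_MIN = 1.0 / 60.0   # one depth sample per second (assumed)

def ambient_n2_pressure(depth_m):
    """Inspired nitrogen partial pressure at depth (salt water, ~10 m per bar)."""
    return (SURFACE_PRESSURE_BAR + depth_m / 10.0) * N2_FRACTION

def update_tissue(p_tissue, depth_m, dt_min=SAMPLE_INTERVAL_MIN):
    """Exponential approach of tissue N2 pressure toward the current ambient value."""
    k = math.log(2) / HALF_TIME_MIN
    p_ambient = ambient_n2_pressure(depth_m)
    return p_ambient + (p_tissue - p_ambient) * math.exp(-k * dt_min)

# Simulate a simple multilevel profile: 10 minutes at 30 m, then 15 minutes at 18 m.
p = ambient_n2_pressure(0.0)                          # start equilibrated at the surface
profile = [30.0] * (10 * 60) + [18.0] * (15 * 60)     # one depth sample per second
for depth in profile:
    p = update_tissue(p, depth)
print(f"compartment N2 loading after the dive: {p:.2f} bar")
```

Because the loading is recalculated from the measured depth at every tick of the clock, the time spent shallower than the deepest point actually counts for something, instead of the whole dive being charged at the “deepest depth”.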

But now the algorithms had a more complex job to do.  The nature of that complexity and how it’s dealt with will be covered in the next blog.

 

