Link to my review of Kubo

See that on my other site: 100filmsin100days


Posted by on August 24, 2016 in Uncategorized



Understanding Exponential Decay



{Special Intro Note:

OK, this post has been a pain to transfer from my original writing into WordPress. Much of the complication came from the super- and sub-scripts in the mathematics, and reworking it so many times left me irritated enough that I haven't gone back to re-verify all the math.

All of which is to say: don't take my word for anything here. If you believe I've made a mistake, let me know and I'll check it out when I have less frustrated eyes.}



Exponential decay is a concept integral to determining the age of natural substances when initial and final concentrations are known for a specific unstable radionuclide. The ratios of individual elements and their isotopes in the environment are largely stable. Some of these isotopes are stable, and some are not; unstable isotopes break down over time. Although one could never predict how long any given atom will exist before it breaks down, the extraordinary number of atoms in any sample and the predictability of their decay into new atoms allow us to compute how long it has been since a substance last incorporated new atoms (i.e., the time of death). By looking at these ratios over time, we can make very accurate measurements of a sample's age.


Different decay rates of various isotopes provide an array of measuring sticks for us to use.

Some examples:

32P has a short half-life of 14.29 days and therefore has to be made synthetically for lab use.

35S is formed from cosmic ray spallation of 40Ar in the atmosphere. It has a half-life of 87 days.

14C is formed cosmogenically by the reaction 14N + 1n → 14C + 1H. It has a half-life of 5,730 +/- 40 years.

40K has a half-life of 1.248×10^9 years.

Some elements have a fairly straightforward decay path. For instance, cosmic rays (high-energy protons and atomic nuclei) from outside the solar system bombard the atmosphere, striking atoms and fracturing them into some helium, some protons, and some neutrons. When one of these neutrons strikes a nitrogen atom (14N), it displaces a proton, converting the atom to 14C. 14C then decays back to 14N, with a half-life of 5,730 years, when one neutron breaks down into a proton and an electron that is emitted from the atom (beta decay). (Note that during formation, a neutron takes the place of a proton. Then in decay, the neutron breaks down, leaving a proton, which stays, and an electron, which is emitted. This is why the atomic mass remains 14 in both directions.)

n + 14N –> 14C + p+

14C –> 14N + e–

Speaking Mathematically…

Exponential decay occurs in a general exponential function

f(x) = a^x, where 0 < a < 1

In other words, as x increases, f(x) decreases and approaches zero. This is exactly the type of relation we want to describe half-life. In this case, we want a = ½, so that we have the relationship

f(x) = (1/2)^x

Rewrite in terms of half-life. Of course, our function does not depend on generic variable x, but time, t.

f(t) = (1/2)^t

    • Simply replacing the variable doesn’t tell us everything, though. We still have to account for the actual half-life, which is, for our purposes, a constant.
    • We could then add the half-life t1/2 into the exponent, but we need to be careful about how we do this. Another property of exponential functions in physics is that the exponent must be dimensionless. Since we know that the amount of substance depends on time, we must then divide by the half-life, which is measured in units of time as well, to obtain a dimensionless quantity.
    • Doing so also implies that t1/2 and t be measured in the same units as well. As such, we obtain the function below.

f(t) = (1/2)^(t / t1/2)

Incorporate initial amount. Of course, our function f(t) as it stands is only a relative function that measures the amount of substance left after a given time as a percentage of the initial amount. All we need to do is multiply by the initial quantity N0. Now we have the formula for the half-life of a substance.

N(t) = N0 (1/2)^(t / t1/2)
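For the numerically inclined, the relationship N(t) = N0 × (1/2)^(t / t1/2) is easy to sanity-check in code. Here is a minimal Python sketch (the function and variable names are my own, not from any particular library):

```python
def remaining(n0, t, t_half):
    """Amount of substance left after time t, given initial amount n0 and
    half-life t_half (t and t_half in the same units):
    N(t) = n0 * (1/2) ** (t / t_half)."""
    return n0 * 0.5 ** (t / t_half)

# One 14C half-life (5,730 years) should leave exactly half the sample.
print(remaining(1.0, 5730, 5730))  # → 0.5
```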

Solve for half-life. In principle, the above formula describes all the variables we need. But suppose we encountered an unknown radioactive substance. It is easy to directly measure the mass before and after an elapsed time, but not its half-life. So let’s express half-life in terms of the other measured (known) variables. Nothing new is being expressed by doing this; rather, it is a matter of convenience. Below, we walk through the process one step at a time.

    • Divide both sides by the initial amount N0.

    • Take the logarithm, base 1/2 of both sides. This brings down the exponent.

log1/2(N(t) / N0) = t / t1/2

    • Multiply both sides by t1/2 and divide both sides by log1/2(N(t)/N0) to solve for half-life: t1/2 = t / log1/2(N(t)/N0). Since there are logarithms in the final expression, you’ll probably need a calculator to solve half-life problems.
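The final expression, t1/2 = t / log1/2(N(t)/N0), can also be checked numerically. Here is a small Python sketch (names are my own), using the fact that a base-1/2 logarithm is just an ordinary logarithm with base 0.5:

```python
import math

def half_life(n0, n, t):
    """Half-life from initial amount n0, remaining amount n, and elapsed
    time t: t_half = t / log_(1/2)(n / n0)."""
    return t / math.log(n / n0, 0.5)

# 300 g decaying to 112 g over 180 s.
print(half_life(300, 112, 180))  # roughly 127 (seconds)
```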



Example Problems

  1. If you start with a sample of 600 radioactive nuclei, how many would remain un-decayed after 3 half-lives?
  2. What is meant by ‘decay constant’?



  3. Warm-up Problem. You receive a shipment of 32P in the lab on the first of the month. When it arrives, you perform an experiment using 10 mL of this reagent. 57 days later, you wish to repeat this experiment using the same amount of radioactive P. About how many mL of your stock will you use? (Don’t calculate this using the equations above; just work it out logically for an approximate answer.)




Worked Examples

  1. 300 g of an unknown radioactive substance decays to 112 g after 180 seconds. What is the half life of this substance?
    • Solution: we know the initial amount N0=300 g, final amount N=112 g,  and elapsed time t=180 s.
    • Recall the half-life formula

t1/2 = t / log1/2(N(t)/N0)

    • Half-life is already isolated, so simply substitute and evaluate.

t1/2 = 180 s / log1/2(112 g / 300 g)

≈ 127 s

    • Check to see if the solution makes sense. Since 112 g is less than half of 300 g, at least one half-life must have elapsed. Our answer checks out.



2. A nuclear reactor produces 20 kg of uranium-232. If the half-life of uranium-232 is about 70 years, how long will it take to decay to 0.1 kg?

    • Solution: We know the initial amount N0 = 20 kg, the final amount N = 0.1 kg, and the half-life t1/2 = 70 years.
    • Rewrite the half-life formula to solve for time: t = t1/2 × log1/2(N(t) / N0)


    • Substitute and evaluate.

t = (70 years) × log1/2(0.1 kg / 20 kg) ≈ 535 years


    • Remember to check your solution intuitively to see if it makes sense.
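This rearrangement, t = t1/2 × log1/2(N(t)/N0), can likewise be verified with a few lines of Python (a sketch; the function name is my own):

```python
import math

def elapsed_time(n0, n, t_half):
    """Time for an amount n0 to decay down to n, given half-life t_half:
    t = t_half * log_(1/2)(n / n0)."""
    return t_half * math.log(n / n0, 0.5)

# 20 kg of uranium-232 (half-life ~70 years) decaying to 0.1 kg.
print(elapsed_time(20, 0.1, 70))  # roughly 535 years
```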



Of Note:

In sourcing this material (especially the maths), I came across something I did not expect. That is… a very good article explaining C14 dating in Answers In Genesis, a source that typically does not curate science in a remotely responsible manner. However, the article linked above does an excellent job in describing the steady-state production of C14 in the atmosphere and the process by which it is used to date carbon-containing remains.

In the end, Answers in Genesis quickly pivots to attacking a straw man, suggesting that carbon is the only tool used for dating the Earth, but this is (willfully?) mistaken in several ways.

“[B]ecause the half-life of carbon-14 is just 5,730 years, radiocarbon dating of materials containing carbon yields dates of only thousands of years, not the dates over millions of years that conflict with the framework of earth history provided by the Bible, God’s eyewitness account of history. ” (my emphasis)

First, C14 is only used to date materials into which carbon was incorporated during life (e.g. organisms) – i.e., it is not the way non-living material is dated. Second, C14’s 5,730-year half-life allows dating of materials to approximately 40,000 years, at which point so little C14 remains that the limit of the method’s accuracy is reached. Further, at such low levels, background contamination from other sources (e.g. bacteria) compromises accuracy.

An analogous argument might run: ‘[memory] yields dates of only [dozens] of years, not the dates over [thousands] of years that conflict with [my notion that the universe began at my birth].’ Framed that way, one might see the fallacy.





Taken in part from:






Posted by on August 17, 2016 in Uncategorized


Flow Cytometry Basics

Definition: Flow cytometry is a technique allowing for the examination of large numbers of single cells at high speed. The principle involved is that cells can be passed within a sheath of fluid so that they pass a laser as individual units. The laser is employed to capture a scatter profile of the cells, which gives information about their size and internal complexity, and it may also excite fluorescent probes that identify specific surface or internal structures.



Cytometers record information about each individual cell across a number of characteristics. To accomplish this, cells are titrated to run at a speed (measured in cells/second – often around 10,000 cells/sec) that is within the capacity of the machine to read. Physically, a constant flow of ‘sheath fluid’ is run across the detector’s path. The cell suspension runs as a separate stream within the sheath fluid.

Data points can be represented very clearly as values for each characteristic measured, and may be listed as a series of numbers as in the table below. Here, cells were ‘labeled’ with antibodies against three known proteins, Btk, CD3, and CD19. Each antibody also carried fluorophores that emit known wavelengths of light when excited by (a) laser(s) of specific wavelength(s). These antibodies are illustrated in the figure to the side. Each type of antibody binds to a specific ‘antigen’ and carries several fluorophores that have been chemically linked to it (illustrated by different colored stars). Alternatively, secondary reagents can be used to bind to the primary antibodies to allow more freedom of color choice or to amplify weaker signals.

Cells are labeled or ‘stained’ with these antibodies by incubating cells with the antibodies for a period of time, followed by washes to remove excess, unbound antibodies. Typically, all stains can be done together in a single incubation unless secondary antibodies are employed to amplify weak signals or adjust the colors used or intracellular staining is required (see below).

Cell   FSC   SSC   FL1   FL2   FL3
1      412   183    41     6    58
2      374   192     4   745     9
3      299   201     3     4     8

If we measured data from each of the three cells above, this might be sufficient to illustrate the identity of each cell type without further analysis. However, if thousands of cells are measured for each condition in an experiment (done in triplicate), tables of numbers lose their value as effective illustrations of the data.

To account for this, scatter plots or density plots (similar to topographical maps) are regularly used to illustrate these larger datasets. Because it is only practical to present values in two dimensions at a time, plots are often drawn such that a population is identified in one plot and then those ‘gated’ cells are then redrawn in subsequent plots to illustrate values in new categories. Cells may also be examined for just one characteristic using a histogram.
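To make gating concrete, here is a toy Python sketch (the events and gate boundaries are invented for illustration, not real cytometer data). It applies a rectangular FSC/SSC gate and reports the percentage of total events inside it, which is what a gate always yields:

```python
# Each event is an (FSC, SSC) pair; values invented for illustration.
events = [(412, 183), (374, 192), (299, 201), (120, 640), (95, 35)]

def gate(events, fsc_range, ssc_range):
    """Return the events whose FSC and SSC both fall inside the rectangle."""
    lo_f, hi_f = fsc_range
    lo_s, hi_s = ssc_range
    return [(f, s) for f, s in events if lo_f <= f <= hi_f and lo_s <= s <= hi_s]

lymphs = gate(events, fsc_range=(250, 450), ssc_range=(150, 250))
pct = 100 * len(lymphs) / len(events)
print(f"{len(lymphs)} of {len(events)} events in gate ({pct:.0f}%)")  # 3 of 5 (60%)
```

Real analysis software draws these gates graphically (and often with irregular shapes), but the bookkeeping is the same: events in, percentage out.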

Forward Scatter (FSC) and Side Scatter (SSC)

FSC and SSC are (very often) the primary measures of the physical properties of cells as they pass through the cytometer’s laser. FSC provides information about the size of the cell, while SSC provides information about the internal complexity of the cell. These data are presented for a sample dataset of white blood cells below. The more numerous Red Blood Cells (RBCs) and platelets have been eliminated prior to analysis.

The cells illustrated in the FSC / SSC plot above fall into identifiable subsets of white blood cells based on their size and complexity. The gated cells are known as lymphocytes, which include both B and T Cells. Gating is a way of selecting a group of cells to analyze further.

Fluorescence

Here, the lymphocyte population is now distinguished by the presence of identifying surface proteins, CD19 (found on B Cells) and CD3 (found on T Cells). By plotting the fluorescence emitted by antibodies to these receptors, we can not only identify the two major populations but gate each of them for further analysis of another protein, the intracellular protein kinase, Btk.

Looking at Btk expression requires a slightly different technique because this protein is located inside the cell. For antibodies to access Btk, we have to permeabilize the cells, punching holes that let the antibodies through. This is done chemically after all surface staining is complete and cells are ‘fixed.’ Otherwise, the protocol is very similar to surface staining.

In the last panel, both B Cells and T Cells (individually identified previously) are assessed for the presence of Btk, and the results are represented as the number of events (cells) exhibiting high or low expression (illustrated below).

Here we can see that the B Cells express uniformly high levels of Btk, while T Cells express little or none. It would also be possible for us to see if only a subpopulation of either B or T cells expressed the kinase. In that case, we could gate expressers vs non-expressers to see if there are any other indications that these cells are different such as cell size or expression levels of the other receptors (CD19 or CD3).

It is possible to use staining to examine other features of the cells as well. For instance, if a treatment might cause cells to divide, this can be tracked using a non-toxic dye that is added to the cells prior to treatment and assessed afterward (typically 3-5 days). Because the dye is added only once, each daughter cell retains only half of its parent’s dye. It is possible to distinguish up to 4-5 divisions clearly.


CellTrace is a ThermoFisher product; this graphic was taken from the product literature.

Data from these proliferation assays are often viewed in histograms to see the proportion of cells at each division, or with another label to see if the dividing cells up- or down-regulate certain receptors. It is also common to include a vitality dye to see whether the cells that don’t divide die, or vice versa, or some other pattern emerges. The cells illustrated below are CD4 T Cells that were induced to divide by a ‘mitogen,’ possibly IL-2. The histograms depict cells in each generation, where the peak farthest to the right is the parent generation (i.e. undivided cells – this would be confirmed by a control population grown without mitogen). The next peak to the left represents cells that have divided once, the next cells that have divided twice, and so on. In quantitating the number of cells that have divided, it is important to consider that ONE parental cell is responsible for TWO cells that have divided once, or FOUR cells that have divided twice, etc. (Note that the CellTrace dye is plotted along a log axis.)
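The parental-cell bookkeeping in that last point can be sketched in Python (the event counts are invented for illustration): dividing the events measured in generation n by 2^n recovers the number of original precursor cells each peak represents:

```python
# Measured events per generation 0, 1, 2, 3 (generation 0 = undivided parents).
# Counts invented for illustration.
events_per_gen = [1000, 800, 1200, 1600]

# One precursor in generation n accounts for 2**n measured cells.
precursors = [n_events / 2 ** gen for gen, n_events in enumerate(events_per_gen)]

divided = sum(precursors[1:])   # precursors that divided at least once
total = sum(precursors)
print(f"{100 * divided / total:.1f}% of precursors divided")  # 47.4% here
```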


Also from the CellTrace product literature

The scatter plot illustrates the same cells, also plotted by cell division on the X-axis; this time, however, the Y-axis separates cells according to their expression of CD4. These data show that the most actively divided cells are split between CD4 expressers and non-expressers. We can also see that CD4 expression spikes in expressers upon division.



Fixing – chemically cross-linking cellular components, locking antibodies onto their targets in a reaction that kills the cells. This is required for longer-term storage of cells and before further processing such as intracellular staining.

Gating – Drawing a limit around a group of cells, or an area where cells might appear, for further analysis and/or quantitation. Gating will always result in a calculation of the percentage of the total cells included within the gate.

Labeling / Staining – to add fluorescent reagents to cells that will bind to specific elements.

Mitogen – a substance that induces cell division.

Washing – to repeatedly add a solvent to cells, spin to pellet the cells and remove unbound materials with the solvent.


Posted by on July 10, 2016 in Uncategorized



Phlogiston, bloodletting, and the four humors

Phlogiston – You know, the stuff that’s in stuff. The burny stuff that’s released by fire?

Not familiar? Well, that’s because it isn’t a thing at all – anymore.

Georg Ernst Stahl (1659–1734) lived in a complicated time for science. It was just being brought out of the dark ages in many ways, and much of what he studied sounds completely foreign and backward to modern ears.

Primarily, Stahl studied the distinction between living and dead material, which he attributed to a vital force: the anima, or spirit, of a living thing, that gives it ‘agency.’ This was the same force, central to the doctrine known as vitalism, that even Louis Pasteur believed was necessary for enzymatic reactions to proceed. Pasteur wasn’t wrong about much, but this one time he fell victim to the prevailing zeitgeist.

Stahl also proposed, in his De motu tonico vitali, that there was a ‘tonic motion’ in things that needed to be permitted for proper circulation of blood. When inflammation or other obstructions occurred, the problem was that this tonic motion was being blocked. One cure for these obstructions was the practice of bloodletting, which addressed the most easily managed of the four humors and was used to treat just about everything.

Although this may sound like a criticism of  Stahl, he was highly regarded as a professor and physician in his time and his work was critical in that it added an experimental element to scientific work. As a testimony to his reputation, he served as physician to both Duke Johann Ernst of Sachsen-Weimar and King Freiderich Wilhelm I of Prussia.

To get to the point here, he proposed the existence of a substance, Phlogiston, that was a component of many things and was released when a thing was burnt. Phlogiston was colorless, odorless, and weightless, and it spoke to the question of why something, once burnt, could not be burnt again. Ash, for example, was completely dephlogisticated matter. It contained no more phlogiston and was therefore impervious to further burning.

Additionally, air could fill with phlogiston, becoming saturated. When this happened, the principle of diffusion would kick in to prevent further diffusion of phlogiston out of a substance. Recall that the basic principle of diffusion is that substances move from regions of high concentration to regions of low concentration. (In fact, the random movement of particles continues unendingly; the apparent result of this movement is that a non-random, concentrated source of particles becomes a random distribution that is effectively uniform. The particles are still moving, but the random distribution appears stable.)

This sets up a simple equation for the combustion of any (flammable) thing, like this:

Phlogiston(s) + heat + something else –> Phlogiston(g) + ash + energy

Actually, it’s a great hypothesis. It does a serviceable job of predicting the behavior of a combustible material in a simple system. Imagine that phlogiston = carbon. This phlogiston / carbon exists in different forms around us: a waxy hydrocarbon chain in the candle, CO2 in the air, and the backbone of sugars.

However, the hypothesis fails to recognize a couple of important things. First, mass doesn’t just disappear during combustion. What remains as ash is lighter than the starting material; CO2 is released, and despite being harder to appreciate, it does have mass. Second, flames don’t necessarily go out because of too much CO2 in the surrounding air, but because of a lack of something else: Oxygen.

It was by following in Stahl’s footsteps that Joseph Priestley discovered oxygen. Priestley had a knack for studying gasses. He was good at capturing and manipulating them in a controlled way. The figure to the left shows an apparatus of a type common in Priestley’s work, where a substance (e.g., KClO3) is heated to boil off a gas (e.g., O2) in a way that the gas displaces water in an inverted flask, so that it may be captured in pure form.

Priestley found that oxygen purified in this way could refresh dephlogisticated (or perhaps, phlogisticated?) air, allowing it to support combustion once more. It could also rescue an animal from suffocating in a bell jar (something Priestley did often enough that it sounds almost like a hobby of his). The idea that air was composed of numerous components was a new one, and already Priestley was purifying these substances and demonstrating their requirement for life and for chemical reactions.

So, how does this change the way we needed to think about phlogiston?

It explains that mass doesn’t just disappear when something is burnt. It goes somewhere; it becomes something else (CO2). It changes the requirement for combustion from the diffusion of matter out of one thing and into the air to the chemical conversion of something into something else.

Instead of the Phlogiston equation, we have the combustion reaction (either proceeding until completion or not):

C + O2 –> CO2 + energy

Phlogiston might still fit in as carbon if we are insistent, but now we see that something else is required as well: Oxygen.

Flames don’t necessarily go out because of too much Phlogiston (CO2) in the surrounding air, but because of a lack of something else, Oxygen.

The importance of Stahl’s work was not that he was right or wrong, but that Stahl was attempting to bring rigor and experimentation into science. In medicine and chemistry, Stahl believed in taking an empirical approach to his work. Ultimately, this was a stepping stone from the pseudoscience of alchemy to the real science of chemistry.


(istry makes a nucleophilic attack on chemy, causing the leaving group (Al) to leave and precipitate out.)





Posted by on June 14, 2016 in Uncategorized



My Wife Steered me to a New York Times Magazine Article that stirred thoughts of Glycolysis

The late 19th / early 20th century was an interesting time to be alive. My wife and I have recently been reading about the lives of several people living at that time, including Eleanor Roosevelt, Dietrich Bonhoeffer, and, most recently for me, Niels Bohr. Reading about it naturally leads to talking about it and marveling at the way this was a time of awakening across the world. Not quite the same as the Renaissance, but more in the sense of the nations of the world becoming intertwined, with actions on one side of the globe having real repercussions on the other.

It was a time of great artists and great scientists. Mark Twain (1835-1910), Herman Melville (1819-1891), James Joyce (1882-1941), Franz Kafka (1883-1924), Pyotr Tchaikovsky (1840-1893), Johannes Brahms (1833-1897), Vincent Van Gogh (1853-1890), and Auguste Rodin (1840-1917) could have run into one another. Mark Twain could have taken Vincent Van Gogh out for beer and an earful of clever conversation that acknowledged a crazy world but didn’t fall into despair because of it.

There have been brilliant minds at all points in history, and we too live in exciting times. We have scientists such as Richard Dawkins and Craig Venter, who make it their duty not just to pursue science but to share it with the rest of us; computer makers Steve Jobs and Bill Gates, who revolutionized the amount of work one person or a small team can do; musicians Paul McCartney, Yo-Yo Ma, and Joshua Bell, who touch us and unite us with their music; and filmmaker John Lasseter, who brings life to the lifeless and makes cartoons parents can enjoy just as much as their children do. J.J. Abrams brought Star Wars back from the brink and reminded us why we fell in love with the franchise back before it was a franchise. And lest we forget, we have the Kardashians and Paris Hilton to show us that, while the unexamined life may not be worth living, one populated with innumerable selfies is just HOT!

But, back to the article.

The article she told me about that brought up the turn of the last century was part of an ongoing series about cancer from the New York Times Magazine discussing the Warburg Effect, named for Otto Heinrich Warburg (1883–1970). I knew of the effect, wherein tumor cells engage in aerobic glycolysis, primarily from the perspective of Craig Thompson’s work unravelling the link between Type II Diabetes and Cancer. That connection is based on tumor cells’ overexpression of the glucose transporter, GLUT-4. The working model states that, given a sufficient supply of sugar and the ability to mop it up quickly via GLUT-4, the limiting factor in cell growth is not energy, but carbon.

It takes a lot of food to support rapidly growing cells (just look at teenagers). Much of that sugar goes to energy (not as readily apparent in teenagers), but a lot also goes to making the building blocks required for cellular proliferation. But to use the carbon in sugar for building rather than energy means that the sugar cannot be completely broken down to CO2 to be exhaled. Instead, cells break the sugar in half by glycolysis to make pyruvate, for a net benefit of only 2 ATP per glucose (as opposed to a possible 36). The intermediary molecules can then be diverted to alternative synthesis pathways for those building blocks.

The basic reactions of glycolysis sum to the following net equation:

Glucose + 2 NAD+ + 2 ADP + 2 Pi –> 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O


However, the last enzyme in the pathway, Pyruvate Kinase, can take two forms. The first is a tetrameric enzyme (M2-PK), which efficiently processes PEP into pyruvate, which can either go on to be aerobically metabolized to generate more ATP or be diverted to fermentation reactions.

An alternate, dimeric form, emerges when Pyruvate Kinase interacts with oncoproteins. This form (Tumor M2-PK) reduces the production of pyruvate to a trickle allowing for the buildup of metabolic intermediary molecules which may be diverted to alternate synthesis pathways for building materials.

An illustration comparing the pathway with either dimeric or tetrameric forms is shown here:


[The figure above came from Sybille Mazurek and has been modified for emphasis. Thank you Sybille!]

All this is a much more mechanistic description than Warburg was able to offer in 1924. At that time, it was recognized that tumor cells were switching to glycolysis even with sufficient O2 available, but the best explanation was that perhaps the mitochondria, where the aerobic reactions of cell respiration occur, were broken. He also thought that this disruption was the cause of cancer, rather than a consequence of other factors leading to cancer, with the switch to aerobic glycolysis among the sequelae of more fundamental problems.

So, despite the details of his hypotheses proving to be incorrect, what he did get right was the recognition of an important shift in metabolism that occurs in tumor cells.

A lot of research has gone into understanding cancer and into understanding diabetes. An unexpected connection between type II diabetes and cancer led to an unexpected synergy between their research efforts. The connection arises as a result of individuals with  type II diabetes overexpressing insulin as a compensatory measure.

Recall the definition of diabetes and the difference between the type I and type II varieties…

Diabetes is an inability to properly regulate the amount of sugar in the blood. When you eat, insulin levels increase to tell cells to take up the elevated blood sugar that comes soon after.

Type I diabetes is a result of the body destroying the pancreatic islet cells that produce insulin early in life so that the insulin signal never happens and unhealthy amounts of sugar accumulate in the blood.

Type II diabetes (formerly called ‘adult onset’ diabetes, before children started getting it) is a result of cells becoming unresponsive to insulin. The association with obesity roughly means that cells so often see insulin that they become accustomed to it and don’t respond appropriately. This is very much like an addiction response. To compensate, the pancreas makes more and more insulin until, eventually, cells are so unresponsive that they just don’t do their job any more and unhealthy amounts of sugar accumulate in the blood.

Two pathways; same outcome.


What this has to do with cancer is that cancer cells are, by their nature, unbounded by many of the rules of other cells. The ones that outlive the others come to dominate the population and before you know it, they’re so numerous and resistant to death that they become a health hazard. (If you’re thinking this sounds like evolution on a cellular scale, you’re thinking the right thing.)

One thing that gives a cell an advantage over other ‘law-abiding’ cells in the body is being greedy when food comes around. This is another central problem with cancer. In healthy organisms, cells ‘recognize’ their place and are willing to sacrifice themselves for the good of the body. Cancer cells have reneged on that agreement and look out only for #1.

On a cellular level, this means that they put up receptors for energy-rich molecules like sugar and take it whenever available. One example receptor cancer cells often use for this is GLUT-4 – the very same receptor that we saw providing sugar for energy and carbon for building above. It turns out that insulin binding to insulin receptors triggers the movement of GLUT-4 receptors from intracellular vesicles to the cell surface. The environment that makes it possible for tumor cells to do this so well is one with excess insulin – like the circulation of someone with type II diabetes, who has been producing more and more insulin to try to coax cells into taking glucose out of the blood.

The take home message:

  • Type II diabetics have very high levels of circulating insulin.
  • Cancer cells can use insulin signals to upregulate glucose-capturing receptors.
  • Cancer cells begin to favor the dimeric form of pyruvate kinase.
  • Cancer cells can also perform aerobic glycolysis, the Warburg effect, to get both energy and biological building blocks from this sugar.
  • The cells that do it best have the most (cellular) progeny.

Therefore:   Obesity –> Type II Diabetes –> Cancer



Posted by on May 30, 2016 in Uncategorized



Why the Electoral College is Relevant Today


Ms. Ginny Stroud

No one cares one bit about the electoral college until, suddenly, everyone pivots to caring about it intensely. That happens around the time a close election teases out the distinction between popular election and the buffer against the masses we call the electoral college.


A common argument against the current system is straightforward: it is in place to prevent the will of the people from overriding the will of the elite. That’s not very democratic now, is it? The electoral college is the inverse of universal suffrage, which is the direction the United States has been heading for most of its existence (slowly). On the one hand, ‘No taxation without representation’ was part of the rationale for the revolution against the English Crown (I know what you’re thinking, D.C., sit down and be quiet).

The 15th amendment, ratified in 1870, prohibited the denial of the right to vote based on race, color, or previous condition of servitude. The 19th amendment, ratified in 1920, prohibited the denial of the right to vote based on sex. In 1971, the 26th amendment reduced the voting age from 21 to 18, in large part to ensure that draft-age men could have a voice.

Yet, don’t forget what Ms. Ginny Stroud, the Civics teacher in Dazed and Confused, reminded her students about July 4th as they finished their last days of school before summer:

Don’t forget what you’re celebrating and that’s the fact that a bunch of slave-owning aristocratic white males didn’t want to pay their taxes.

And if there’s one thing that slave-owning aristocratic white males wanted, it was to keep their position of being slave-owning aristocratic white males safe. Hence, the electoral college. Or, to put it in a more Hamiltonian manner (from Federalist No. 68, ‘The Mode of Electing the President,’ New York Packet, Friday, March 14, 1788):

It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations. It was also peculiarly desirable to afford as little opportunity as possible to tumult and disorder. This evil was not least to be dreaded in the election of a magistrate, who was to have so important an agency in the administration of the government as the President of the United States. But the precautions which have been so happily concerted in the system under consideration, promise an effectual security against this mischief.

The choice of SEVERAL, to form an intermediate body of electors, will be much less apt to convulse the community with any extraordinary or violent movements, than the choice of ONE who was himself to be the final object of the public wishes.

That’s right, don’t convulse the community with extraordinary or violent movements. That’s been one of my primary reasons for being a supporter of Hillary this whole election cycle, because the older I get, the more unsettling a revolution sounds. And this is not because I don’t think that the country could be better. It’s because I also think that it could be a lot worse.

A little taste of what convulsing the community will get you can be had by reading The Economist‘s article ‘Tethered by History’, recounting the failings of the Arab Spring in bringing about change in the Middle East. Maybe I’m over-reacting, but the last time I looked, the world economy had yet to find its footing again after the recession beginning in 2008. Fortunately, many of the aftershocks of that quake were dulled by the stabilizing influence of the EU in keeping countries like Greece and Portugal from becoming failed states. After all, nothing puts pressure on a marriage like financial problems.

That all said, the Presidential primaries have been a good, safe place for the parties to work out their own problems and figure out what they believe in.

With that in mind, the greatest good I see coming from the Bernie Sanders campaign is that it has shown that a substantial portion of Democratic voters are further to the Left than where the Democratic Party has drifted in recent decades. Sanders is a sign to Hillary that moving to the Right, as many Democratic nominees have done, is not the only way to garner more votes.

The good that I see coming from the Donald Trump campaign is much the same. The Republicans have been moving to the Right and building grassroots support, especially amongst social conservatives, since the time of Ralph Reed. Ted Cruz was an excellent example of this in the current election cycle, which offered nothing but party non-conformists for Republicans to choose from, with the exceptions of Rubio and Bush.

The harm of these candidates is that the only way to move the parties was to unmoor the safety lines and push. My only hope is that if either of these outsiders gets into office, the governmental ground game that has resembled trench warfare for the past decade will continue to spend its time doing nothing until we can put saner minds in control again.

Which brings me back to my initial position: that separating the fickle will of the people from the actual reins of power may be the only thing that keeps the US from turning directly into the tempest. I’ve been accused before of being an elitist when it comes to government, but frankly, it’s because I would much rather have the best person in office than the loudest. Isn’t that what being an elite is about? Not being subject to the wind, but still sensing its influence and understanding what it means?

Just like so many other multiple choice questions, when none of the answers look correct, select the best from what you have. And if you can’t see which is the best available, don’t vote.



Posted by on May 7, 2016 in Uncategorized



Dunning-Kruger and The Donald: torn between two topics

Like many other things in my life, I was made aware of the Dunning-Kruger Effect from listening to NPR. This time, it was my longest running favorite show, This American Life, that clued me in.

Briefly, the Dunning-Kruger Effect is a cognitive bias in which relatively unskilled persons suffer illusory superiority, mistakenly assessing their ability to be much higher than it really is. The effect gets its name from the authors of the 1999 paper, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments” by Kruger, Justin and Dunning, David. Journal of Personality and Social Psychology, Vol 77(6), Dec 1999, 1121-1134.

One of the things that Dunning (who was interviewed for the show) said was that the effect had become a ‘meme’ that was often mentioned on Twitter. To test this, the show’s producer immediately went online, entered the names of the two authors, and came up with a tweet calling Donald Trump a perfect example of the effect.

Ha! This is gunna be huge!! I can see myself gliding off the rails…

It’s hard not to see Trump as an example of any number of psychological conditions. In fact, I think that it might be this more than anything that has all of us (even the ones who don’t admit it) fascinated by the Trump spectacle.

And the pool answered,
‘But I loved Narcissus because,
as he lay on my banks and looked down at me,
in the mirror of his eyes I saw ever my own beauty mirrored.’

Of course, merely by saying this, we are all sharing the tongue-in-cheek agreement that we know, for a fact, that The Donald is not at all intelligent. The assumption is not just that he is no better than average, but that he is significantly below average. Which might be going a bit too far. A total moron would have lost all the money he ever got from dad, wouldn’t he? I suspect that we’re all just overcompensating for Trump’s own excesses in regard to self-opinion. We react to the narcissist by knocking him down – all the way down.

Again, I’m losing control

The problem is not whether the man is smart or dumb, ignorant or wise. The problem is that we are about to hand the reins of what is arguably the world’s most powerful country over to an amateur out of frustration that things aren’t going better than they are. Imagine using that same logic in hiring a plumber or electrician for your home. “I’m so sick of all the electrical problems this place is having, I think I’ll hire Britney Spears to wire my house! She’s rich; she must know what she’s doing.”

While Britney probably is willing to admit that she doesn’t know anything about electrical work (I’m assuming this is true, but I don’t know), the narcissist finds nothing outside of his ken. See this great article in Vanity Fair where psychologists participate in some armchair sport and analyze Trump’s mind.


Figure 3, from the 1999 paper.

Getting back to the Dunning-Kruger effect, I think it’s worth noting that all groups in the data shown here believe that they’ve performed similarly. Dunning seemed to think that this was because the highest quarter was either modest or overestimating others’ abilities, while the lowest two quarters were simply using the same poor analytical skills in assessing themselves as they did in solving the test questions. How much of this is just hand-waving to explain why we all feel like the children of Lake Wobegon: above average?


The Figure 3 data above shows the Effect following an examination of grammar ability. Nearly identical data resulted from similar examinations of logic and … humor? Apparently jokes were rated by comedians (in order to establish factual data on humor?) and then the participants were examined as with the other subjects.

Really, though. How can you say that someone is incorrect about their ability to recognize funny? The problem I see with that data is that everyone – absolutely everyone – should have said that they scored 100% on that test.



Posted by on April 28, 2016 in Uncategorized


