
Archive for the ‘Science’ Category

Messing with digestion

Sunday, May 2nd, 2010

There are several dietary products that try to minimize the calories absorbed in the GI tract. One was a non-absorbable fat, Olestra; the fat just runs through the GI tract, with a side effect of diarrhea and occasionally worse effects. There are also drugs that keep fat from being absorbed: Xenical, a drug that inhibits intestinal lipase and slows the breakdown and absorption of fat. Alli is a low dose of the same drug available OTC.

There are also several types of fiber sold as fat-trappers. It’s not clear whether they work, but they also have the same side effects as fat blockers.

There are other ideas that have been tried. Stimulants like amphetamines work fairly well, with some well-known side effects. So far, drugs that mess with the regulation of appetite haven’t worked well–the regulation has too many redundant pathways.

Rather than blocking fat, how about using enzymes to break down either sugars or fats? The simplest approach would be to use enzymes (I’m sure suitable ones could be found) grown in E. coli, isolated, and taken as gel caps. To keep the proteins from being denatured by stomach acid, a coated or time-release capsule could be used. A second step would be to engineer the enzymes to resist digestion–synonymous substitutions and so on. Another possibility would be to express the enzymes as secreted proteins in a gut bacterium–one of the ones that mainly lives in the jejunum. The bacteria could be ingested in pills the way probiotics are.

A more difficult implementation would be to use non-protein enzymes to digest sugar or fat. This would be harder to engineer but likely more effective.

Which enzymes? Well, that would take some study. Likely two or three would be needed to break down the metabolite and then waste any ATP formed.

How big is a nanobot?

Sunday, April 11th, 2010

Nanobots are miniature molecular machines. So far they are just an idea, as no one knows how to design or build one. But there is discussion of them, and certain properties can be considered. For example, how big is a nanobot?

A ‘simple’, dumb miniature machine could be quite small: for example, an antibody attached to a virus-like particle that binds to particular cells, gets absorbed by the cell, and opens to release its DNA into the cell. But that’s not a very interesting machine; the really interesting nanobots are miniature robots with sensors, computer logic to make decisions, and hands to grab or manipulate things.

So what’s the minimum size for a smart nanobot? Say it has 100,000 bytes of memory–roughly 1,000,000 bits–at 10 atoms/bit, that’s 10 million atoms. Figure the same number of atoms for the computer logic, and add an equivalent number for energy storage and generation, structure, sensors, and manipulators: 30 million atoms in total.

If it is mostly carbon, the atoms will be roughly 1 Å apart. Assume a spherical shape, and look at protein structures to estimate the packing of atoms in a compact structure. From this, the core of the nanobot is estimated at 1000 Å (roughly 100 nm) across.
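The arithmetic can be checked with a few lines of Python. The 10 atoms/bit figure is from the estimate above; the ~10 Å³ per atom packing density is my assumption, roughly what you get from the density of a folded protein interior:

```python
import math

# Back-of-envelope check of the nanobot size estimate.
# Assumption: ~10 A^3 per atom, typical of densely packed protein cores.
bits = 1_000_000                  # 100,000 bytes of memory, rounded as above
atoms_memory = bits * 10          # 10 atoms per bit
atoms_logic = atoms_memory        # same again for computer logic
atoms_rest = atoms_memory         # energy, structure, sensors, manipulators
total_atoms = atoms_memory + atoms_logic + atoms_rest   # 30 million atoms

volume_A3 = total_atoms * 10.0    # ~10 A^3 per atom when densely packed
diameter_A = (6 * volume_A3 / math.pi) ** (1 / 3)       # sphere diameter

print(f"{total_atoms:,} atoms, diameter ~{diameter_A:.0f} A"
      f" (~{diameter_A / 10:.0f} nm)")
```

This gives a diameter of about 800 Å, consistent with the ~1000 Å / 100 nm figure above.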

An E. coli cell is roughly 1 µm long, so a nanobot would be about 1/10 the size of a bacterium. This is small–about the size of an average virus particle–and small enough to exist inside cells. But a nanobot is large enough to be recognized and engulfed by immune cells, and to need a specific mechanism to enter cells.

T4 bacteriophage

Measuring sea level

Thursday, April 8th, 2010

I’ve always wondered how sea level is measured so accurately. Global sea level changes are measured to high accuracy–the global yearly sea level rise, averaged over many years, is measured to a fraction of a mm. Sea level is measured in two ways: the old system uses tidal gauges, and satellite altimeters have flown since 1992 (TOPEX/Poseidon, followed by Jason-1 in 2001).

Here’s a page explaining how the tidal gauges work: Univ. of Colorado. Ah, stilling well!

tidal gauge diagram

There’s nothing to intelligent design creationism

Friday, February 12th, 2010
flagellum electron micrograph
Composite electron micrograph of the flagellum basal body and hook, produced by rotational averaging (Francis et al., 1994).

Stephen M. Barr has an article in First Things, The End of Intelligent Design?. Unfortunately, Barr is looking to rescue something from intelligent design (ID), so his criticisms are muted. His main interest is whether ID has been useful in advancing religion and theology. In a faux even-handed approach he criticizes ID for not proving its claims, but then tosses in criticism of scientists for unspecified excesses. He also tries to win favor with a religious audience by claiming that “the ID movement has been treated atrociously and that it has been lied about by many scientists”, a judgment he doesn’t substantiate.

The readership of First Things is a strange group; many of the comments go off in philosophical directions, but no one talks about the central issue–whether ID is true or false. Is there good evidence for it? Is it likely to be true? Could it be true? Or is it known to be false?

Barr’s article starts well: it is true that there’s “not a single phenomenon that we understand better today” through ID. To restate that, there is no evidence at all for ID, and that is the reason ID has been dismissed by biologists.

When the idea that certain biological structures are “irreducibly complex” was proposed, several examples were given: the bacterial flagellum, the immune system, the blood clotting cascade, the vertebrate eye, the Krebs cycle, etc. In fact, biologists have evolutionary models and physical evidence of how each of these things evolved. No “irreducibly complex” structures were proposed and then proven to be so. In truth, none of the proposed examples are even open questions–things that puzzle biologists and could possibly be shown to be “irreducibly complex” in the future.

And the case for ID is really worse than what I’ve described. It’s not that ID theorists proposed structures that biologists didn’t have good evolutionary models for–structures that could have turned out to be “irreducibly complex”. When these examples were given, there was already published research explaining the evolutionary origins of each one. Biologists reviewing Behe’s book, for example, were able to look up and reference the research discounting his examples. No better “irreducibly complex” examples have come to light since then.

Scientific consensus

Thursday, February 4th, 2010

In a meta discussion about AGW, Eric Raymond writes about how the term scientific consensus is used in public science debates. He seems to misunderstand it, thinking it is an ‘appeal to authority’ type of argument and thus a sign that the party that raises it has no more convincing arguments.

It certainly can be that sort of poor argument, but when raised by scientists it is typically something different. The scientific consensus on a topic is mentioned as a shorthand way of communicating to the public what’s understood by scientists working in the field. Scientists are trying to communicate that certain things are known, and that contrary arguments–ones that pick one or two studies and argue that the contrary opinion is *really* true, or at least that no consensus exists–are misleading. Either the cited study is part of a technical debate among researchers who all understand and accept the consensus, and is being misconstrued, or too much weight is being given to the opinion of a rare contrarian.

The contrarians can be further divided into 1) cranks of various sorts and 2) scientists working on a contrary idea who understand that the evidence still favors the consensus but hope to make discoveries that will eventually tip the balance of evidence in favor of their idea. The second sort will happily talk up his idea if asked about it, but if asked about the consensus will acknowledge that it is currently the best overall explanation.

So scientists will mention the scientific consensus on an issue to the public to ground the discussion with the fundamentals of what is known. With the fundamentals set down, scientists can then explain the details of how things are known, what discussions within the field are about, or discuss contrarians.

Book review: Ever Since Darwin

Thursday, January 28th, 2010

Ever Since Darwin cover
Ever Since Darwin is a book of essays by Stephen Jay Gould, originally written as columns for Natural History magazine.

This is Gould’s first collection of essays, published in 1977, and it’s a great introduction to his writing. The essays are shorter and the ideas simpler than those in some of his later collections. There are great essays on Darwin’s life and times, and on how evolutionary and developmental biology got worked out through fits and starts. The last few essays, on sociobiology, are kind of weak–I guess 30 years on, it’s hard to really understand the ground that was being fought over.

Overall, a great book!

Creature matching game

Sunday, December 13th, 2009

Here’s an idea for a game. It would be like the kids’ game Memory, where cards are turned over and matches are taken off the playing area. Instead of using identical pictures, images of different animals or plants would be used, and any pair could be matched by a player. Otherwise the play would be quite similar to standard Memory.

The idea would be to match organisms by evolutionary similarity. So scoring would give maximum points for animals of the same species, next most for the same genus, fewer points for animals in the same order, and no points for creatures in different phyla. The easiest implementation would be as a computer game, with the computer handling the scoring. Alternatively, cards could be made with the lineage described on the back. Each classification category could be displayed in a different color or with a different symbol, and the first (most specific) matching lineage symbol would give the points for the match.

This would make the play interesting as any pair could be matched but the player would have to decide if a pair was good enough to pick up or to wait for a better and higher scoring pair next turn.
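The scoring rule above can be sketched in a few lines of Python. The lineages and point values here are illustrative placeholders, not a worked-out card set:

```python
# Score a pair of cards by the most specific taxonomic rank they share.
# Point values are placeholders; a real set would tune them.
POINTS = {"class": 2, "order": 4, "family": 6, "genus": 8, "species": 10}
RANKS = ("class", "order", "family", "genus", "species")

# Lineage as (class, order, family, genus, species), broadest first.
LINEAGES = {
    "red fox":    ("Mammalia", "Carnivora", "Canidae", "Vulpes", "vulpes"),
    "arctic fox": ("Mammalia", "Carnivora", "Canidae", "Vulpes", "lagopus"),
    "wolverine":  ("Mammalia", "Carnivora", "Mustelidae", "Gulo", "gulo"),
}

def score(a, b):
    """Points for matching cards a and b: most specific shared rank wins."""
    best = 0
    for rank, taxon_a, taxon_b in zip(RANKS, LINEAGES[a], LINEAGES[b]):
        if taxon_a != taxon_b:
            break
        best = POINTS[rank]
    return best

print(score("red fox", "arctic fox"))  # same genus: good match
print(score("red fox", "wolverine"))   # same order only: weak match
```

A computer version would just apply this to whichever pair the player picks up, so the decide-or-wait tension falls out of the point table automatically.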

The design aspect of picking a card set could make an easy set or a hard set, and two aspects of the choice would affect this. First, if animals fall into close pairs that are distantly related (two parrots, two foxes), the set would be easier; graduated and overlapping groups of cards make the set harder (dog, fox, skunk, weasel, otter, raccoon). Second, how much the player knows about the animals and their relationships can make a card set easier or harder. Some groups are obvious–birds, whales, bats–while other animals are either not as well known (e.g., coatis) or don’t have an obvious lineage (e.g., wolverine). And all these examples are mammals–invertebrates would make a ridiculously hard game! So easy sets could be made for kids, and moderate to hard sets for adults.

Here are three game sets:
Mammals, butterflies, and marine invertebrates.

Preventing wisdom teeth

Wednesday, December 2nd, 2009

I have thought for years that it should be possible to regrow teeth. Teeth should be one of the easiest body parts to regrow. It seemed likely that the tooth bud, once formed, would receive signals from its local environment and grow into the correct type of tooth and emerge into place. That’s what happens during normal adult tooth development. So generating a tooth bud looks to be the key step. And indeed, in the past few years there have been reports of progress from research in this area. See this news article and this paper from the Yelick lab in São Paulo, Brazil.

But much easier than growing teeth should be killing tooth buds. Specifically, if the buds of wisdom teeth were killed then the painful, expensive surgery to remove wisdom teeth could be avoided.

Tooth buds form during fetal development; Wikipedia has a detailed overview. Wisdom teeth don’t begin to calcify until a person is 7 to 10 years of age, so it should be fairly easy to kill the tooth bud at early stages. An injection into the tooth bud of a localized cytotoxin–either a general one or perhaps one specific to dividing cells–would kill the stem cells that form the core of the tooth bud. A toxin dose that kills cells within a 1-2 mm radius of the injection site should be effective. The gums would heal up, and the tooth bud would be gone. The dentist should be able to pick the injection location based on the expected eruption site. Inspection of x-rays may help pinpoint the bud location, and a jig could be used to precisely position the needle tip.

Googling briefly, I don’t see any other mention of this idea. It would be easy to test experimentally if an animal with late enough tooth development can be found.

Hobby molecular biology

Monday, November 16th, 2009

What would be required to set up an inexpensive system for hobbyists to experiment with molecular biology? Consider PCR, for example. PCR requires a heat-stable polymerase, primers, nucleotides, and buffer.

The DNA polymerase is easy to purify: grow E. coli carrying a plasmid expressing Taq DNA polymerase, boil, spin down the denatured proteins, and you are left with the heat-stable DNA polymerase.

Primers can be bought inexpensively–at $0.35 per base, a pair of 18-mers costs less than the shipping. Though they are inexpensive only if one set gets used repeatedly.

The cost of buffer (NaCl, MgCl2, Tris) is negligible.

Nucleotides are expensive up front, $150 for a set of dNTPs (dATP, dCTP, dGTP, dTTP), but this works out to about $0.06 per 50 ul PCR reaction.
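As a sanity check on the $0.06 figure, here is the per-reaction arithmetic, assuming a $150 set is 4 × 25 µmol and a 50 µl reaction uses 200 µM of each dNTP (both assumptions, typical values rather than a specific vendor’s spec):

```python
# Rough cost-per-reaction check for a dNTP set.
# Assumptions: set is 4 x 25 umol for $150; 200 uM of each dNTP per reaction.
set_cost = 150.0                   # dollars for the full dNTP set
umol_each = 25.0                   # umol of each dNTP in the set

rxn_volume_ul = 50.0               # 50 ul PCR reaction
conc_each_uM = 200.0               # 200 uM each dNTP in the reaction
# uM (umol/L) * ul (1e-6 L) -> umol consumed per reaction, per dNTP
umol_each_per_rxn = conc_each_uM * rxn_volume_ul * 1e-6

reactions = umol_each / umol_each_per_rxn
print(f"~{reactions:.0f} reactions, ${set_cost / reactions:.3f} each")
```

Under those assumptions one set covers a few thousand reactions at about six cents apiece, matching the estimate above.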

Can nucleotides be prepared by a hobbyist? Nucleotides are easy to obtain–DNA is a major constituent of cells and is easy to purify. DNA + DNAase = dAMP, dCMP, dGMP, and dTMP. How can the triphosphates be regenerated?

One route is to do it enzymatically, using polyphosphate:AMP phosphotransferase (PPT) and adenylate kinase (AdK) with polyphosphate (polyP) as the energy source (Resnick and Zehnder, 2000). It is not clear how the triphosphate product would be separated and purified. Presumably different enzyme pairs could be used to regenerate the other dNTPs from the monophosphates.

These other enzymes could be cloned into E. coli expression vectors and purified either by tagging them with His6 and using a Ni or Co resin, or by cloning heat-stable isoforms from one of the extremophiles and using a one-step boiling purification like that used for Taq polymerase.

Update: Bochkov et al., 2006 describe a method of preparing dNTPs from digested DNA. DNA is digested with DNAase and Nuclease S1: DNAase chews DNA into short oligonucleotides, and the nuclease breaks them down to single dNMPs.

Then a crude extract of E. coli is prepared that contains the kinases to convert dNMPs to dNTPs. The kinases use ATP, which must be regenerated; this is done with acetokinase, using acetyl phosphate ($30/g) as the energy source. Combined dNMPs were converted to dNTPs with at least 86% regeneration and separated from reactants by chromatography on a Dowex 1×2 anion exchanger. The conversion was followed by thin-layer chromatography.

For PCR it may be possible to use a crude purification of nucleotides, but purification protocols would need to be developed and tested.

Water on the moon!

Monday, November 16th, 2009

Last October, NASA’s LCROSS mission slammed a spent rocket booster, then the LCROSS spacecraft itself, into the moon. No debris plume was seen from Earth, but observations from LCROSS of the booster’s impact indicate the presence of water on the moon. How much water? Most news accounts don’t say, but the Science magazine article does.

100 kilograms of water were detected from an impact that created a crater estimated to be 20 m wide and 3 m deep. So 100 kg of water in about 500 m3 of regolith works out to roughly 0.1 g of water per kg of regolith (a Googled reference gives 2.3 to 2.6 g/cm3 as lunar regolith density).
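Here is the same back-of-envelope calculation in Python. Treating the crater as a shallow cone is my assumption; the ~500 m3 excavated volume and the 2.3-2.6 g/cm3 density range are the figures quoted above:

```python
import math

# Water concentration from the LCROSS impact numbers.
water_kg = 100.0                   # water detected in the plume
crater_width_m, crater_depth_m = 20.0, 3.0

# A 20 m x 3 m cone holds ~314 m^3; the estimate above rounds to ~500 m^3.
cone_volume = math.pi * (crater_width_m / 2) ** 2 * crater_depth_m / 3
volume = 500.0                     # m^3, the rounded excavated volume
density = 2450.0                   # kg/m^3, midpoint of 2.3-2.6 g/cm^3

regolith_kg = volume * density
g_per_kg = water_kg * 1000 / regolith_kg
print(f"cone volume ~{cone_volume:.0f} m^3, "
      f"~{g_per_kg:.2f} g water per kg regolith")
```

This lands at a bit under 0.1 g/kg, i.e. roughly 0.01% water by mass.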

The article gives a higher estimate for the water content, 0.1% to 10%, well above my crude 0.01% estimate. Which is great–enough water to extract easily and live off. Best news for space exploration in thirty years!

LCROSS impact plume
(Credit: NASA)