
Saturday, December 2, 2023

Preliminary Pieces of AI

We already live in an AI world, and really, it isn't so bad.

It is odd to hear all the hyperventilating about artificial intelligence of late. One would think it is a new thing, or some science-fiction-y entity. Then there are fears about the singularity and loss of control by humans. Count me a skeptic on all fronts. Man is, and remains, wolf to man. To take one example, we are contemplating the election of perhaps the dumbest person ever to hold the office of president. For the second time. How an intelligence, artificial or otherwise, is supposed to worm its way into power over us is not easy to understand, looking at the nature of humans and of power.

So let's take a step back and figure out what is going on, and where it is likely to take us. AI has become a catch-all for a diversity of computer methods, mostly characterized by being slightly better at doing things we have long wanted computers to do, like interpreting text, speech, and images. But I would offer that it should include much more- all the things we have computers do to manage information. In that sense, we have been living among shards of artificial intelligence for a very long time. We have become utterly dependent on databases, for instance, for our memory functions. Imagine having to chase down a bank balance or a news story without access to the searchable memories that modern databases provide. They are breathtakingly superior to our own intelligence when it comes to the number of things they can remember, the accuracy with which they can remember them, and the speed with which they can find them. The same goes for calculations of all sorts, and more recently, complex scientific math like solving atomic structures, creating wonderful CGI graphics, or predicting the weather.

We should view AI as a cabinet filled with many different tools, just as our own bodies and minds are filled with many kinds of intelligence. The integration of our minds into a single consciousness tends to blind us to the diversity of what happens under the hood. While we may want gross measurements like "general intelligence", we also know increasingly that it (whatever "it" is, and whatever it fails to encompass of our many facets and talents) is composed of many functions. Several decades of work in AI, computer science, and neuroscience have shown these functions to be far more complicated and difficult to replicate than the early AI pioneers imagined, once they got hold of their Turing machine with its infinite potential.

Originally, we tended to imagine artificial intelligence as a robot- humanoid, slightly odd looking, but just like us in form and abilities. That was a natural consequence of our archetypes and narcissism. But AI is nothing like that, because full-on humanoid consciousness is an impossibly high bar, at least for the foreseeable future, and requires innumerable capabilities and forms of intelligence to be developed first. 

The autonomous car drama is a good example of this. It has taken every ounce of ingenuity and high tech to get to a reasonably passable vehicle, which is able to "understand" key components of the world around it: that a blob in front is a person, instead of a motorcycle, or that a light is a traffic light instead of a reflection of the sun. Just as our brain has a stepwise hierarchy of visual processing, we have recapitulated that evolution here by harnessing cameras in these cars (and lasers, etc.) to not just take in a flat visual scene, which by itself is meaningless, but to parse it into understandable units like other cars, crosswalks, buildings, bicyclists, etc. Visual scenes are very rich, and figuring out what is in them is a huge accomplishment.

But is it intelligence? Yes, it certainly is a fragment of intelligence, but it isn't consciousness. Imagine how effortless this process is for us, and how effortful and constricted it is for an autonomous vehicle. We understand everything in a scene within a much wider model of the world, where everything relates to everything else. We evaluate and understand innumerable levels of our environment, from its chemical makeup to its social and political implications. Traffic cones do not freak us out. The bare obstacle course of getting around, such as in a vehicle, is a minor aspect, really, of this consciousness, and of our intelligence. Autonomous cars are barely up to the level of cockroaches, on balance, in overall intelligence.

The AI of text and language handling is similarly primitive. Despite the vast improvements in text translation and interpretation, the underlying models these mechanisms draw on are limited. Translation can be done without understanding the text at all, merely by matching patterns from pre-digested pools of pre-translated text, regurgitated as cued by the input text. Siri-like spoken responses, on the other hand, do require some parsing of meaning out of the input, to decide what the topic and the question are. But the scope of these tools tends to be very limited, and the wider the scope they are allowed, the more embarrassing their performance, since they are essentially scraping web sites and text pools for question-response patterns, instead of truly understanding the user's request or any field of knowledge.

Lastly, there are the generative ChatGPT style engines, which also regurgitate text patterns reformatted from public sources in response to topical requests. The ability to re-write a Wikipedia entry through a Shakespeare filter is amazing, but it is really the search / input functions that are most impressive- being able, like the Siri system, to parse through the user's request for all its key points. This betokens some degree of understanding, in the sense that the world of the machine (i.e. its database) is parceled up into topics that can be separately gathered and reshuffled into a response. This requires a pretty broad and structured ontological / classification system, which is one important part of intelligence.

Not only is there a diversity of forms of intelligence to be considered, but there is a vast diversity of expertise and knowledge to be learned. There are millions of jobs and professions, each with their own forms of knowledge. Back in the early days of AI, we thought that expert systems could be instructed by experts, formalizing their expertise. But that turned out to be not just impractical, but impossible, since much of that expertise, formed out of years of study and experience, is implicit and unconscious. That is why apprenticeship among humans is so powerful, offering a combination of learning by watching and learning by doing. Can AI do that? Only if it gains several more kinds of intelligence, including an ability to learn in very un-computer-ish ways.

This analysis has emphasized the diverse nature of intelligences, and the uneven, evolutionary development they have undergone. How close are we to a social intelligence that could understand people's motivations and empathise with them? Not very close at all. How close are we to a scientific intelligence that could find holes in the scholarly literature and manage a research enterprise to fill them? Not very close at all. So it is very early days in terms of anything that could properly be called artificial intelligence, even while bits and pieces have been with us for a long time. We may be in for fifty to a hundred more years of hearing every advance in computer science being billed as artificial intelligence.


Uneven development is going to continue to be the pattern, as we seize upon topics that seem interesting or economically rewarding, and do whatever the current technological frontier allows. Memory and calculation were the first to fall, being easily formalizable. Communication network management is similarly positioned. Game learning was next, followed by the Siri / Watson systems for question answering. Then came a frontal assault on language understanding, using the neural network systems, which discard the former expert systems' obsession with grammar and rules in favor of much simpler statistical learning from large pools of text. This is where we are: far from fully understanding language, but highly capable in restricted areas. And the need for better AI is acute. There are great frontiers to realize in medical diagnosis and in the modeling of biological systems, to name only two fields close at hand that could benefit from a thoroughly systematic and capable artificial intelligence.

The problem is that world modeling, which is what languages implicitly stand for, is very complicated. We do not even know how to do this properly in principle, let alone have the mechanisms and scale to implement it. What we have in terms of expert systems and databases does not have the kind of richness or accessibility needed for a fluid and wide-ranging consciousness. Will neural nets get us there? Or ontological systems / databases? Or some combination? However it is done, full world modeling and the ability to learn continuously into those models are key capabilities needed for significant artificial intelligence.

After world modeling come other forms of intelligence like social / emotional intelligence and agency / management intelligence with motivation. I have no doubt that we will get to full machine consciousness at some point. The mechanisms of biological brains are just not sufficiently mysterious to think that they can not be replicated or improved upon. But we are nowhere near that yet, despite bandying about the word artificial intelligence. When we get there, we will have to pay special attention to the forms of motivation we implant, to mitigate the dangers of making beings who are even more malevolent than those that already exist... us.

Would that constitute some kind of "singularity"? I doubt it. Among humans there are already plenty of smart people and plenty of diversity, which leave niches where everyone has something useful to do. Technology has been replacing human labor forever, and will continue moving up the chain of capability. And when machines exceed the level of human intelligence, in some general sense, they will get all the difficult jobs. But the job of president? That will still go to a dolt, I have no doubt. Selection for some jobs is by criteria that artificial intelligence, no matter how astute, is not going to fulfill.

Risks? In the current environment, there are plenty of risks, which are typically cases where technology has outrun our will to regulate its social harm. Fake information, thanks to the chatbots and image makers, can now flood the zone. But this is hardly a new phenomenon, and perhaps we need to get back to a position where we do not believe everything we read, in the National Enquirer or on the internet. The quality of our sources may once again become an important consideration, as it always should have been.

Another current risk is chaos from automation. For example, in the financial markets, the new technologies seem to calm the markets most of the time, arbitraging with relentless precision. But when things go out of bounds, flash breakdowns can happen, very destructively. The SEC has sifted through some past events of this kind and set up regulatory guard rails. But they will probably be perpetually behind the curve. Militaries are itching to use robots instead of pilots and soldiers, and to automate killing from afar. But ultimately, control of the military comes down to social power, which comes down to people of not necessarily great intelligence.

The biggest risk from these machines is that of security. If we have our financial markets run by machine, or our medical system run by super-knowledgeable artificial intelligences, or our military by some panopticon neural net, or even just our electrical grid run by super-computers, the problem is not that they will turn against us of their own volition, but that some hacker somewhere will turn them against us. Countless hospitals have already faced ransomware attacks. This is a real problem, growing as machines become more capable and indispensable. If and when we make artificial people, we will need the same kind of surveillance and social control mechanisms over them that we do over everyone else, but with the added option of changing their programming. Again, powerful intelligences made for military purposes to kill our enemies are, by the reflexive property of all technology, prime targets for being turned against us. So just as we have locked up our nuclear weapons and managed to not have them fall into enemy hands (so far), similar safeguards would need to be put on similar powers arising from these newer technologies.

We may have been misled by the many AIs and super-beings of science fiction, Nietzsche's Übermensch, and similar archetypes. The point of Nietzsche's construction is moral, not intellectual or physical- a person who has thrown off all the moral boundaries of civilization, especially Christian civilization. But that is a phantasm. The point of most societies is to allow the weak to band together to control the strong and malevolent. A society where the strong band together to enslave the weak... well, that is surely a nightmare, and more unrealistic the more concentrated the power. We must simply hope that, given the ample time we have before truly comprehensive and superior artificial intelligent beings exist, we have exercised sufficient care in their construction, and in the health of our political institutions, to control them as we have many other potentially malevolent agents.


  • AI in chemistry.
  • AI to recognize cells in images.
  • Ayaan Hirsi Ali becomes Christian. "I ultimately found life without any spiritual solace unendurable."
  • The racism runs very deep.
  • An appreciation of Stephen J. Gould.
  • Forced arbitration against customers and employees is OK, but fines against frauds... not so much?
  • Oil production still going up.

Saturday, March 26, 2022

A Brief History of DNA Sequencing

Technical revolutions that got us to modern DNA sequencing.

DNA is an incredibly elegant molecule- that much was apparent as soon as its structure came out. It is structurally tough, and its principles of information storage and replication are easy to understand. It is one instance where evolution came up with, not a messy hack, but brilliant simplicity, which remains universal over all the life that we know. While its modeled structure was immediately informative, it didn't help to figure out its most important property- its sequence. Methods to sequence DNA have gone through an interesting evolution of their own. First were rather brutal chemical methods which preferentially cut DNA at certain nucleotides. Combined with the hot new methods of labeling the DNA with radioactive 32P, and of separating DNA fragments by size by electrically pushing them (electrophoresing) through a jello-like gel, this could give a few base pairs of information.

A set of Maxam-Gilbert reactions, with the DNA labeled with 32P and exposed to X-ray film after being separated by size by electrophoresis through a gel. The smallest fragments are on the bottom, the biggest on the top. Each of the four reactions cleaves at certain bases, as noted at the top. The interpretation of the sequence is on the right. PvuII is a bacterial enzyme that cleaves DNA, and the (palindromic) sequence noted at the bottom is the site where it does so.

Next came the revolution led by Fred Sanger, who harnessed a natural enzyme that polymerizes DNA in order to sequence it. By providing it with a mixture of natural nucleotides and defective ones that terminate the extension process, he could easily generate far bigger assortments of DNAs of various lengths (that is, reads), as well as much higher accuracy of base calling; the chemistry of the Maxam-Gilbert process was quite poor in base discrimination. This polymerase method also eventually used a different isotope to trace the synthesized DNAs, 35S, which is less energetic than 32P and gave sharper signals on film, which was how the DNA fragments were visualized after being laid out and ordered by size by electrophoresis.

The Sanger sequencing method. Note the much longer read length, and cleaner reactions, with fully distinct base specificity. dITP was used in place of dGTP to help clarify G/C-rich regions of sequence, which are hard to read due to polymerase pausing and odd behavior in gel electrophoresis. 

There have been many technological improvements and other revolutions since then, though none have won Nobel prizes. One was the use of fluorescent terminating nucleotides in place of radioactive ones. In addition to improving safety in the lab, this obviated the need to generate four different reactions and run them in separate lanes on the electrophoretic gel. Now everything could be mixed into one reaction, with four different terminating fluorescent nucleotides in different colors. Plus, the mix of synthesized DNA products could now be run through a short bit of gel held in a machine, and a light meter could see them come off the end, in marching order, all in an automated process. This was a very significant advance in capacity, automation, and cost savings.

Fluorescent terminating nucleotides facilitate combined reactions and automation.

After that came the silicon chip revolution- the marriage between Silicon Valley and Biotech. Someone discovered that silicon chips made a good substrate to attach DNA, making possible large-scale matrix experiments. For instance, DNA corresponding to each gene from an organism could be placed at individual positions across such a chip, and then experiments run to hybridize those to bulk mRNA expressed from some organ or cell type. The readout would then be fluorescent signals indicating the level of expression of each gene- a huge technical advance in the field. For sequencing, something similar was attempted, laying down all possible 8 or 9-mers across such a chip, hybridizing the sample, thereby trying to figure out all the component sequences of the sample. The sequences were so short, however, that this never worked well. Assembling a complete sequence from such short snippets is nearly impossible.

What worked better was a variation of this method, where the magic of DNA synthesis was once again harnessed, together with the matrix layout. Millions of positions on a chip or other substrate have short DNA primers attached. The target DNA of interest, such as someone's genome, is chopped up and attached to matching primers, then hybridized to this substrate. Now a few amplification steps are done to copy this DNA a bunch of times, all still attached in place to the substrate. Finally, complementary strands are all melted off and the single DNA strands are put through a laborious step-by-step sequencing-by-synthesis process across the whole apparatus, with chemicals successively washed through: in each round, a polymerase adds a single fluorescently labeled, reversibly blocked nucleotide to every growing strand. Each step ends with a fluorescent signal that says what base just got added at that position, and a giant camera or scanner reads the plate after each pass, adding +1 to the sequence of each position. The best chemical systems of this kind can go to 150 or even 300 rounds (i.e. base pairs), which, over millions of different DNA fragments from the same source, is enough to later re-assemble most DNA sequences, using a lot of computer power. This is currently the leading method of bulk DNA sequencing.
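
To make the per-cycle bookkeeping concrete, here is a toy sketch in Python (invented color-to-base mapping and cluster names, nothing like any vendor's actual chemistry or file formats): each imaging pass appends one base to every cluster's growing read.

```python
# Toy sketch of cycle-by-cycle base calling: one color per cluster per cycle.
COLOR_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}  # invented mapping

# Pretend scanner output: the color seen at each of three clusters, per cycle.
cycles = [
    {"cluster1": "red",    "cluster2": "blue", "cluster3": "green"},
    {"cluster1": "green",  "cluster2": "blue", "cluster3": "green"},
    {"cluster1": "yellow", "cluster2": "red",  "cluster3": "blue"},
]

reads = {cluster: "" for cluster in cycles[0]}
for cycle in cycles:                              # one chemistry + imaging pass
    for cluster, color in cycle.items():
        reads[cluster] += COLOR_TO_BASE[color]    # +1 base per cluster per cycle

print(reads)   # {'cluster1': 'ACT', 'cluster2': 'GGA', 'cluster3': 'CCG'}
```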

A single DNA molecule being sequenced by detecting its progressive transit through a tiny (i.e. nano) pore, with corresponding electrical readout of which base is being wedged through.

Unfortunately, our DNA has lots of repetitive and junky areas which read sizes of even 300 bases cannot do justice to. We have thousands of derelict transposons and retroviruses, for instance, presenting impossible conundrums to programs trying to assemble a complete genome, say, out of ~200 bp pieces. This limitation of mass-sequencing technologies has led to a niche market for long-read DNA sequencing methods, the most interesting of which is nanopore sequencing. It is almost incredible that this works, but it is capable of reading the sequence of a single molecule of single-stranded DNA at a rate of 500 bases per second, for reads going to millions of bases. This is done by threading the single strand through a biological (or artificial) pore just big enough to accommodate it, situated in an artificial membrane. With an electrical field set across the membrane, there are subtle fluctuations detectable as each base slips through, which are different for each of the four bases. Such is the sensitivity of modern electronics that this can be picked up reliably enough to read the single thread of DNA going through the pore, making possible hand-held devices that can perform such sequencing at reasonable cost.
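
Why short reads fail on repeats can be seen with a toy example (invented sequences, perfect error-free reads, not a real assembler): two genomes that differ only in the order of the blocks between repeat copies yield exactly the same set of short reads, so no assembler could tell them apart, while reads long enough to span the repeat resolve the ambiguity.

```python
def read_set(genome, k):
    """Every k-base substring of the genome, i.e. perfect k bp 'reads'."""
    return {genome[i:i + k] for i in range(len(genome) - k + 1)}

REPEAT = "ATATATATATAT"                                    # a 12 bp repeat
blocks = ["GGGCGGGC", "TTTCTTTC", "CCCGCCCG", "GGGTGGGT"]  # unique 8 bp flanking blocks
genome1 = blocks[0] + REPEAT + blocks[1] + REPEAT + blocks[2] + REPEAT + blocks[3]
genome2 = blocks[0] + REPEAT + blocks[2] + REPEAT + blocks[1] + REPEAT + blocks[3]

print(read_set(genome1, 8)  == read_set(genome2, 8))    # True: 8 bp reads can't tell them apart
print(read_set(genome1, 20) == read_set(genome2, 20))   # False: reads spanning the repeat can
```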

All this is predicated on DNA being an extremely tough molecule, able to carry our inheritance over the decades, withstand rough chemical handling, and get stuffed through narrow passages, while keeping its composure. We thought we were done when we sequenced the human genome, but the uses of DNA sequencing keep ramifying, from forensics to diagnostics of every tumor and tissue biopsy, to wastewater surveillance of the pandemic, and on to liquid biopsies that promise to read our health and our future from a drop of blood.


Saturday, December 11, 2021

Cooking With Solar

Who knew cooking with energy from the sun would be so difficult?

Cooking with rays from the sun- what could be more delightful, or more efficient? The same rays that warm the skin can heat food as well- one merely needs to concentrate the heat a few fold. Well, that is remarkably difficult to do in practical terms. Not only do you need to concentrate the sun's heat, but then you have to preserve the heat you collect, without blocking out the light with all that insulation. This can be quite a trick. Thermostatic control? You must be joking- none of the currently sold or proffered DIY projects incorporate such an extravagance. The current state of play is a slightly demented world of youtube videos, fly-by-night companies, and charitable efforts pointed at developing regions. But rest assured, it can be done.

Naturally, the most significant drawback is that the sun doesn't shine all the time, confining solar cooking to mid-day times, and sunny conditions. Several kinds of cookers have been developed, each with individual drawbacks and features. 

  • Parabolic stove
  • Vacuum tube oven
  • Closed box oven
  • Open panel oven

First off, the parabolic solution puts the premium on power. While the other cookers are akin to ovens, this one is more like a range / stove. It gets extremely hot and cooks in a hurry. The concentrated light from the sun needs, however, to be constantly tracked and aimed at the pan on the burner. And it is, in effect, an invisible flame, which presents some difficulty: it can burn a finger or blind you in an instant. One company developed a reasonably practical design, complete with glowing video. But then it promptly shut down and disappeared, I assume due to the daunting legal liability implied in selling such an appliance. These cookers remain very much a DIY, and at your own risk, proposition.

A parabolic cooker- adjust often, and use with care!

Second are vacuum tube ovens, which are basically thermos bottles with sun-facing inputs. These have outstanding insulation, so they capture the radiation coming in very effectively, storing it as heat. They can be used in cloudy conditions and maybe in non-mid-day conditions. The downside is that the thermos structure limits capacity for food, and also hides it from view. These also come in water-heating versions, filling a core camping and emergency need.

A vacuum tube style of oven. This one has quite high capacity. The central thermos provides extremely effective insulation, collecting every bit of the insolation.

Third are closed-box ovens, which are perhaps the most widely used form of solar cooking. Given enough insulation and a well-sealed glass top, you can make a reasonably practical oven out of cardboard boxes, wood, or metal, which can get up to 350 °F. This is a slow kind of cooker, perhaps more like a crockpot than an oven, taking quite a bit of time to heat up. They are not so sensitive to light direction, so they can be left out for a lazy afternoon and will still work. This is an amazingly active area of DIY activity, with endless variations. One of the most impressive I have seen is a sleek, low oven built of glass and wood, meant to stay outside full time.

 A commercially made box oven, with glass top and room for one or two pots.

A DIY version of a box oven, with clean lines and very high capacity.

Lastly, a more portable version of a solar oven is the open panel oven, where a set of foldable or collapsible reflective panels surround the pot, without much other structure. These are maximally simple, and aimed at camping and other portable needs. But they need something extra to hold in the heat around the pot, which may be a plastic oven bag, or a pair of glass bowls that go around the black pot inside. When properly protected, set up, and with large enough collectors, these can get to 300 degrees and work well cooking stews, rice, etc. These enjoy a wide variety of DIY efforts and styles as well, and one of the best is offered by a maker in Southern California.

A panel cooker being used on the go. Note the glass bowl holding the central pot.

Those are the current types, each with its pluses and minuses. Once one considers solar cooking, it is natural to want to deploy it to those who really need it- the rural and poor around the world, who have lots of sun, and not many other resources. The scourge of traditional cooking fuels in these areas is particularly alarming, usually being wood, coal, or dung, which lead to deforestation, climate change, land depletion, and copious pollution, both indoor and outdoor. Thus solar cooking becomes another sort of colonial dream foisted on the less fortunate, who have not set up proper infrastructure to pillage the earth and pollute the atmosphere. But the various impracticalities of solar cooking, including inconvenient timing, outdoor location, low capacity, slow speed, and unusual, non-local, and fragile materials, have doomed such efforts to marginal effectiveness. Maybe some further leap in the technology, like incorporating a heat storage mechanism (rocks?), might solve some of these problems. It is amazing, really, how convenient the stored / reduced forms of carbon (in biomass and fossil fuels) are for our needs, and how hard they are to replace.


  • Shades of WW2: All Russia wants is a little elbow room.
  • The gravitational wave observatories are running, and recording the death spirals of black holes.
  • The next presidential election could start a civil war.
  • Carbon tax, now.
  • Good sleep, good life.

Saturday, August 7, 2021

Covid Will Never End

But it will be a very small problem, once everyone is vaccinated.

It should be obvious by now that Covid-19 is endemic and will be with us forever. At a fatality rate of roughly 2% for the unvaccinated, it is better than the bubonic plague (50%) and smallpox (30%), but far worse than influenza (0.1%), not to mention colds and other minor respiratory infections. With vaccination, the fatality rate is reduced to, in very rough terms, 0.05%. Thus, with vaccination, Covid-19 is a much less significant public health problem, superseded by influenza, whose vaccine is much less effective.

(This calculation of the death rate in vaccinated people is rather fraught, because the infection rate is hard to gauge. But assume that over the four months when roughly one third to one half of the population became vaccinated, the exposure rate of this population was similar to that of the unvaccinated and productively infected population. The overall death toll was roughly 50,000 people, of whom 1,263 were vaccinated, for a ratio of roughly 40:1.)
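
For what it is worth, here is the back-of-the-envelope arithmetic behind that ratio, using the post's own numbers and its assumption of comparable exposure in the two groups:

```python
# The arithmetic behind the rough 40:1 figure, with the numbers quoted above.
total_deaths      = 50_000     # all covid deaths over the ~4 month window
vaccinated_deaths = 1_263      # deaths among the vaccinated
unvaccinated_deaths = total_deaths - vaccinated_deaths

print(round(unvaccinated_deaths / vaccinated_deaths))   # ~39, i.e. roughly 40:1

# Applying that ratio to the ~2% unvaccinated fatality rate gives the
# ballpark vaccinated figure quoted at the top of the post.
print(0.02 / 40)                                        # 0.0005, i.e. ~0.05%
```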

While breakthrough infections and consequences like hospitalization and death (and possibly long covid, though that is unclear) are not impossible for vaccinated people, they are rare enough that we can resume normal activities. Current policies to limit the spread of the virus, even by vaccinated people who can carry and transmit it via light infections, are mostly aimed at the remaining unvaccinated population, who will be ending up in the hospital at much higher rates, and creating the public health burden. So no wonder patience is wearing thin with the unvaccinated, who will eventually just be cut loose to take their chances while the rest of society moves on in a new world where covid is as manageable as influenza, or even more so.

Why is Covid less severe in children? ACE2, the key receptor for the virus, seems to be expressed at naturally lower levels in children, and is driven even lower by incidental conditions like asthma and allergies. Other cold viruses, to which children are widely exposed, may have "pre-vaccinated" them against the new coronavirus. And children seem to produce fewer inflammatory cytokines, producing a less exaggerated immune response, which is the main factor in later Covid pathology.

Why all the breakthrough infections? One issue is that vaccination primes the immune system; it does not actually prevent infection. What it does is shorten the time that the body needs to fight an infection that has already occurred, by pre-educating the immune system about the target it is facing. So vaccinated people are going to be infected at normal rates, but they just won't show symptoms nearly as frequently. And second, as widely discussed, the vaccines have great, but not perfect, effectiveness. It stands to reason, as has been widely reported, that more doses are better than fewer, and as the virus mutates to meet our weapons of social distancing and vaccination, new editions of covid vaccines will be needed. There can never be enough education of our immune systems against these evolving threats. With the advent of successful mRNA vaccines that can be rapidly programmed with new immunogens, we have the opportunity to increase our protection against both new threats, in the form of yearly (or more frequent) covid boosters, and against old threats, like influenza, whose vaccines are stuck in a time warp of antiquated technology and poor effectiveness.

This all implies that we (the vaccinated population) will be spreading around covid on an ongoing basis. It will be endemic, and our protection will be by vaccination rather than isolation. The virus has little interest in killing us, so it will likely evolve to be more benign, as our countless cold viruses have done, thereby spreading more effectively in a well-mixed population.

The extremely urgent need for universal vaccination raises the question of why the FDA has not been faster in its authorizations. All children should have already been cleared for vaccination, and full authorization should already have been granted for adults. The safety and efficacy data is present in overwhelming amounts, and where it is not (in the case of children), the studies should have been started much sooner and run on compressed schedules. One gets the impression that this is a bureaucracy that is overly wedded to process rather than data- particularly the critical interpretation of data that comes from actual use in the field, rather than from corporate reports. And this slowness has implications for future vaccines, such as ones against influenza, as well. We deserve better from our public institutions.


  • R0, vaccination rates, etc.
  • Can vaccinated people get long covid? Maybe.
  • What are those breakthrough cases like?
  • Two is better than one.
  • Variants and vaccines.
  • Ever wonder why religious people are gullible?
  • Crypto is not a currency, it is a gamble and tax dodge.
  • Gene editing is now a thing.

Saturday, March 6, 2021

Prospects for Hydrogen

What are the prospects for hydrogen as part of a sustainable, green economy?

Hydrogen is perennially spoken of as a fuel of the future- clean, renewable, light. It is particularly appealing in an environment (like that of California) where solar energy is having a huge impact on the grid and causing rising portions of solar production to be "curtailed". That is, turned off. But even in California, solar power has hardly scratched the surface. Only a few roofs have solar, and the potential for more power production is prodigious. Over time, as more renewable sources of energy come on line, the availability of excess power at peak times will rise dramatically, prompting a huge need for storage, or other ancillary uses for excess power. Many storage schemes exist or are under development, from traditional water pumping to batteries, flywheels, gravitational weights, etc. Hydrogen is one of them, spoken of as a versatile storage and fuel medium, which can be burned, or even more efficiently put through fuel cells, to return electrical power.

A typical day on California's electrical grid. The top teal line is total demand, and the purple zone is power not supplied by renewables like wind, hydropower, and solar. During the mid-day, most power now comes from solar, an amazing accomplishment. Roughly 2 GW are even turned off at the highest peak time, due to oversupply, either locally or regionally. How could that energy be put to use?

Unfortunately, as a fuel, hydrogen leaves much to be desired. We have flirted with hydrogen-powered cars over the last couple of decades, and they have been a disaster. Hydrogen is such an awkward fuel to store that battery-powered electric vehicles have completely taken over the green vehicle market, despite their slowness in refueling. The difficulties begin with hydrogen's ultra-low density. The Sun has the gravitational wherewithal to compress hydrogen to useful proportions, at the equivalent of 100,000 earth atmospheres and up. But we on Earth do not, and struggle to get hydrogen into small enough packages to be useful for applications such as transport. The prospect of Hinden-cars is also unappealing. Lastly, hydrogen embrittles metals, working its way into them and weakening them. Transforming our natural gas system to use green hydrogen would essentially require replacing it.

The awkwardness, yet usefulness, of (reduced) hydrogen as an energy currency in an oxygenated atmosphere is incidentally what led life during its early evolution to devise more compact storage forms, i.e. reduced carbon compounds like fats, starches, and sugars. And these are what we dug up again from the earth to fuel our industrial, technological, and population revolutions.

But how useful is hydrogen for strictly in-place storage applications, like load balancing and temporary grid storage? Unfortunately, the news there is not good either. Physical storage remains an enormous problem, so unless you have a handy sealed underground cavern, storage at large scales is impractical. Second, the round-trip efficiency of making hydrogen from water by electrolysis and then getting electricity back by fuel cell (both rather expensive technologies) is roughly 35 to 40%. This compares unfavorably to the ~95% efficiency of electrical batteries like Li ion, and the 80% efficiency of pumped water/gravity systems. Hydrogen here is simply not a leading option.
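
As a rough sketch of what those round-trip numbers mean in practice (the electrolyzer and fuel cell efficiencies below are generic assumptions of mine, not measured figures):

```python
# What "round-trip efficiency" means for 100 kWh of surplus solar.
surplus_kwh = 100.0

electrolysis_eff = 0.70            # electricity -> hydrogen (rough assumption)
fuel_cell_eff    = 0.55            # hydrogen -> electricity (rough assumption)
hydrogen_round_trip = electrolysis_eff * fuel_cell_eff   # ~0.38, in the 35-40% range quoted

for name, eff in [("hydrogen (electrolysis + fuel cell)", hydrogen_round_trip),
                  ("lithium-ion battery", 0.95),
                  ("pumped hydro", 0.80)]:
    print(f"{name}: {surplus_kwh * eff:.0f} kWh returned")
```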

Does that mean we are out of luck? Not quite. It turns out that there already is a hydrogen economy, as feedstock for key chemical processes, especially ammonia and fertilizer production, and fossil fuel cracking, among much else. Global demand is 80 million tons per year, which in electrical terms is three to four thousand terawatt-hours. That is a lot of energy, on the order of total demand on the US electric grid, and could easily keep excess power generators' hands full for the foreseeable future. Virtually all current hydrogen is made from natural gas or coal, so the green implications of reforming this sector are obvious. It already has storage and pipeline systems in place, though not necessarily at locations where green energy is available. So that seems to be the true future of hydrogen, not as a practical fuel for the economy in general, but as a central green commodity for a more sustainable chemical industry.
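
The rough conversion behind that figure, assuming about 33 kWh/kg for hydrogen's energy content and about 50 kWh/kg of electricity for electrolysis (my round numbers, not from any particular source):

```python
# 80 million tonnes of hydrogen per year, expressed in electrical terms.
kg_per_year = 80e6 * 1_000                      # tonnes -> kg

energy_content_twh = kg_per_year * 33 / 1e9     # kWh -> TWh
electrolysis_twh   = kg_per_year * 50 / 1e9

print(f"{energy_content_twh:,.0f} TWh of energy content")          # ~2,640 TWh
print(f"{electrolysis_twh:,.0f} TWh to make it green by electrolysis")  # ~4,000 TWh, about US grid demand
```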


Saturday, July 20, 2019

We'll Keep Earth

The robots can have the rest of the universe.

The Apollo 11 anniversary is upon us, a wonderful achievement and fond memory. But it did not lead to the hopeful new-frontier future that has been peddled by science fiction for decades, for what are now obvious reasons. Saturn V rockets do not grow on trees, nor is space, once one gets there, hospitable to humans. Earth is our home, where we evolved and are destined to stay.

But a few among us have continued taking breathtaking adventures among the planets and toward other stars. They have done pirouettes around the Sun and all the planets, including Pluto. They are our eyes in the heavens- the robots. I have been reading a sober book, Nick Bostrom's Superintelligence, which works through in painstaking, if somewhat surreal, detail what artificial intelligence will become in the not too distant future. Whether there is a "singularity" in a few decades, or farther off, there will surely come a time when we can reproduce human level intelligence (and beyond) in machine form. Already, machines have far surpassed humans in memory capacity, accuracy, and recall speed, in the form of databases that we now rely on to run every bank, government, and app. It seems inescapable that we should save ourselves the clunky absurdity, vast expense, and extreme dangers of human spaceflight and colonization in favor of developing robots with increasing capabilities to do all that for us.

It is our fleet of robots that can easily withstand the radiation, weightlessness, vacuum, boredom, and other rigors of space. As they range farther, their independence increases. On the Moon, at 1.3 light seconds away, we can talk back and forth, and control things in near real time from Earth. The Mars rovers, on the other hand, needed to have some slight intelligence to avoid obstacles and carry out lengthy planned maneuvers, being roughly 15 light-minutes from Earth. Having any direct control over rovers and other probes farther afield is increasingly impossible, with Jupiter 35 minutes away, and Neptune four light hours away. Rovers or drones contemplated for Saturn's interesting moon Titan will be over a light hour away, and will need extensive autonomous intelligence to achieve anything.
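
A quick sanity check of those delays, using very rough average Earth distances (they vary a great deal as the planets move around their orbits):

```python
# One-way light delay from Earth to a few bodies, at rough mean distances.
C_KM_PER_S = 299_792

distances_km = {
    "Moon":    384_400,
    "Mars":    225e6,      # varies roughly 55-400 million km
    "Jupiter": 780e6,
    "Neptune": 4.3e9,
}
for body, d in distances_km.items():
    seconds = d / C_KM_PER_S
    print(f"{body}: {seconds:,.0f} s one-way ({seconds / 60:.1f} min)")
```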

These considerations strongly suggest that our space program is, or should be, in large part joined with our other artificial intelligence and robotics activities. That is how we are going to be able to achieve great things in space, exploring far and wide to figure out how we came to be, what other worlds are like, and whether life arose on them as well. Robots can make themselves at home in the cosmos in a way that humans never will.

Matt Damon, accidentally marooned on Mars.

Bostrom's book naturally delves into our fate, once we have been comprehensively outclassed by our artificial creations. Will we be wiped out? Uploaded? Kept as pets? Who knows? But a reasonable deal might be that the robots get free rein to colonize the cosmos, spreading as far as their industry and inventiveness can carry them. But we'll keep Earth, a home for a species that is bound to it by evolution, sentiment, and fate- and hopefully we can harness some of that intelligence to keep it in a livable, even flourishing, condition.


Saturday, June 15, 2019

Can Machines Read Yet?

Sort of, and not very well.

Reading- such a pleasure, but never time enough to read all that one would like, especially in technical fields. Scholars, even scientists, still write out their findings in prose- which is the richest form of communication, but only if someone else has the time and interest to read it. The medical literature, at the flagship NCBI Pubmed resource, stands at about 30 million articles in abstract and lightly annotated form. Its partner, PMC, has 5.5 million articles in full text. This represents a vast trove of data which no one can read through, yet which tantalizes with its potential to generate novel insights, connections, and comprehensive and useful models, were we only able to harvest it in some computable form.

That is one of the motivations for natural language processing, or NLP, one of many subfields of artificial intelligence. What we learn with minimal effort as young children, machines have so far been unable to truly master, despite decades of effort and vast computational power. Recent advances in "deep learning" have made great progress in pattern parsing, and learning from large sets of known texts, resulting in the ability to translate one language to another. But does Google Translate understand what it is saying? Not at all. Understanding has taken strides in constricted areas, such as phone menu interactions, and Siri-like services. As long as the structure is simple, and has key words that tip off meaning, machines have started to get the hang of verbal communication.
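
For the flavor of that kind of constricted, keyword-driven "understanding", here is a deliberately crude sketch (invented intents and keywords): enough for a phone menu, nowhere near reading.

```python
# Keyword-matching "intent" classifier: the words tip off the meaning, and
# anything outside the expected patterns falls straight through.
INTENTS = {
    "balance": ["balance", "how much", "account"],
    "hours":   ["open", "hours", "close"],
    "agent":   ["representative", "human", "agent"],
}

def classify(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

print(classify("What time do you close today?"))       # hours
print(classify("Can I talk to a human being please"))  # agent
print(classify("Why did my last transfer bounce?"))    # unknown: no keyword, no understanding
```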

But dealing with extremely complex texts is another matter entirely. NLP projects directed at the medical literature have been going on for decades, with relatively little to show, since the complexity of the corpus far outstrips the heuristics used to analyze it. These papers are, indeed, often very difficult for humans to read. They are frequently written by non-English speakers, or just bad writers. And the ideas being communicated are also complex, not just the language. The machines need to have a conceptual apparatus ready to accommodate, or better yet, learn within such a space. Recall how perception likewise needs an ever-expanding database / model of reality. Language processing is obviously a subfield of such perception. These issues raise a core question of AI- is general intelligence needed to fully achieve NLP?


I think the answer is yes- the ability to read human text with full understanding assumes a knowledge of human metaphors, general world conditions, and specific facts and relations from all areas of life, which amounts to general intelligence. The whole point of NLP, as portrayed above, is not to spew audio books from written texts (which is already accomplished, in a quite advanced way), but to understand what it is reading fully enough to elaborate conceptual models of the meaning of what those texts are about. And to do so in a way that can be communicated back to us humans in some form, perhaps diagrams, maps, and formulas, if not language.

The intensive study of NLP over the Pubmed corpus reached a fever pitch in the late 2000's, but has been quiescent for the last few years, generally for this reason. The techniques that were being used- language models, grammar, semantics, stemming, vocabulary databases, etc.- had fully exploited the current technology, but still hit a roadblock. Precision could be pushed to ~80% levels for specific tasks, like picking out the interactions of known molecules, or linking diseases with genes mentioned in the texts. But general understanding was and remains well out of reach of these rather mechanical techniques. This is not to suggest any kind of vitalism in cognition, but only that we have another technical plateau to reach, characterized by the unification of learning, rich ontologies (world models), and language processing.

The new neural network methods (tensorflow, etc.) promise to provide the latter part of the equation, sensitive language parsing. But from what I can see, the kind of model we have of the world, with infinite learnability, depth, spontaneous classification capability, and related-ness, remains foreign to these methods, despite the several decades of work lavished on databases in all their fascinating iterations. That seems to be where more work is needed, to get to machine-based language understanding.


  • What to do about media pollution?
  • Maybe ideas will matter eventually in this campaign.
  • Treason? Yes.
  • Stalinist confessions weren't the only bad ones.
  • Everything over-the-air ... the future of TV.

Saturday, December 29, 2018

Solar Power is Not as Easy as it Looks

Adding the first increment to the grid is far easier than adding the last, if we want to decarbonize electricity. Review of "Taming the Sun", by Varun Sivaram

Global warming is no longer a future problem, but a now problem, and getting rapidly worse. We need a total societal focus on extricating ourselves from fossil fuels. Putting aside the brain-dead / know-nothing ideology of the current administration, the world is broadly, if grudgingly, onboard with this program. What is lacking are the political will and technical means to get there. California now gets 29% of its electricity (including imports from other states) from renewables, of which 10% is photovoltaic (PV) solar power. The grid operator shows a pleasing daily graph of solar power taking over one-third of electricity demand around mid-day.

A typical day on California's power grid. At mid-day, a fair portion of the state's power comes from solar (teal). But come sundown, many other plants need to ramp up to provide for peak demand.
 
Varun Sivaram's book is an earnest, somewhat repetitious though well-written and detailed look at why this picture is misleading, and what it will really take to go the rest of the way to decarbonization. Solar power has very bad characteristics for electrical grid power- the grid operator has no control over when it comes in (it is not dispatchable), and it all tends to come in at the same time of day. While this time (mid-day) is typically one of heavy usage, it is not the peak of usage, which comes during the transition to cooking and evening activities, from 5 to 7 PM. This means not only that the rest of the grid has to work around solar's intermittency, but that the rest of the grid has to constitute a full fleet of power plants for peak needs- solar will not reduce the need for either baseload or peak power capacity.
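
The point is easy to see with toy numbers (invented for illustration, not CAISO data): subtract solar from demand hour by hour, and the evening peak that the rest of the fleet must cover is barely touched.

```python
# "Net load" = demand minus solar. Solar peaks at noon, demand in the evening.
hours  = [6,  9,  12, 15, 18, 21]
demand = [22, 26, 30, 32, 38, 33]   # GW, invented round numbers
solar  = [0,  6,  12, 11, 2,  0]    # GW, invented round numbers

for h, d, s in zip(hours, demand, solar):
    print(f"{h:02d}:00  demand {d} GW  solar {s} GW  net load {d - s} GW")
# The maximum net load (36 GW at 18:00) is barely below the demand peak (38 GW),
# so the conventional fleet still has to be sized for nearly the full peak.
```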

This is extremely disappointing, and means that adding the first 10% of solar to the grid is relatively easy, but adding more becomes increasingly difficult, and offloads rising expenses to other parts of the system. We do not have the technical means to economically address these issues yet. Solutions come in two basic forms- energy storage, or alternative modes of non-CO2 emitting generation.

Storage technologies by current capacity and capability. Pumping water uphill into reservoirs is the only existing method of storing power in grid-scale amounts over long periods.

Storage is easy to understand. If we could only bottle all that solar electricity somehow, all would be well. Even if we can't save summer power for winter and can save it only for a few days, we could build enough solar generation capacity (at the current cheap and falling prices) to cover our needs at the lowest production time of year, and throw away the excess the rest of the year. This assumes that, over a suitably large geographic area, there will not be so much extended cloud cover that this could not be reasonably planned. But such storage technology simply does not exist yet. The diagram above mentions some of the major candidates. The best known are chemical batteries, like lithium ion. This is how off-grid and home backup systems manage the intermittency of solar power. But these are expensive, which is why it is cheaper to buy power from the local utility than to go off-grid, and also cheaper to build a grid-tied solar system than to go off-grid. The most mature grid-scale storage technology is hydropower- pumping water back uphill into a reservoir. This is obviously not available in most places where storage is needed.
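
The overbuild-and-spill idea can be sketched with household-scale numbers (all invented for illustration): size the array for the worst month, and accept that much of the summer output gets thrown away.

```python
# Size a solar array for the worst month of the year, then see how much
# surplus that implies in the best month.
daily_demand_kwh      = 30    # household daily consumption (invented)
worst_month_sun_hours = 2.5   # effective full-sun hours per day in December (invented)
best_month_sun_hours  = 7.0   # in June (invented)

array_kw      = daily_demand_kwh / worst_month_sun_hours   # 12 kW to get through winter
summer_output = array_kw * best_month_sun_hours            # 84 kWh/day in summer

print(array_kw, summer_output - daily_demand_kwh)   # 12.0 kW array, 54.0 kWh/day spilled in June
```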

Where various storage technologies are in development.

Other methods like flywheels, raising and lowering rocks, etc. are all on the drawing board, but not yet in practical deployment at grid scale, or even demonstrated to be economic at that scale. Making fuels like hydrogen or hydrocarbons from solar energy is another prospect for storage, but again are not currently economical. Hydrogen has been touted as the all-around fuel of the future for many uses, but is so difficult to handle that, again, it is far from currently practical. Getting there will take money and effort. 2050 is when we need the power sector substantially decarbonized, world-wide (if not sooner!). It sounds far off, but it is only about 30 years- a very short time in power technology terms. The scale needed is also gargantuan, so we need these solutions to get off the drawing board as soon as possible- there is no time to waste.

The alternative methods of no-carbon generation are currently wind and nuclear, with CO2 storage (sequestration) from fossil fuel plants as a further option. Carbon sequestration is not a new technology, and is something that would be directly motivated by a carbon tax, though it is also phenomenally wasteful (as are many of our more adventurous methods of producing fossil fuels, like tar sands)- a fair fraction of the energy produced goes right back into compressing and pumping the CO2 back underground. Wind is also getting to be a mature technology, and shares with solar the problem of intermittency, so is not a solution for dispatchable or baseline power. Sivaram does note at length, however, that a helpful technology for both solar and wind is long-distance DC transmission, which would allow rich sources, like the plains states, or the Sahara, to be connected to heavy users.

The dream of the next generation of nuclear power, which has not been demonstrated at grid scale.

That leaves nuclear power as an important element in future power systems. Generation IV nuclear power promises cleaner, proliferation-proof, more efficient, and more sustainable nuclear power. China has several programs in development, as does the US. Again, as with all the other necessary technologies for a fully sustainable grid, these are not mature technologies, and need a great deal of research and development to come to fruition. I will not even delve into fusion power, which has not been demonstrated terrestrially even in principle, let alone developed.

The point of all this, as made at some length by Sivaram, is that the key to getting to a decarbonized future (for electricity, the easiest energy sector to deal with) lies not simply in scaling up the PV present into a glorious future. Rather, it lies in further intensive research and development of a variety of complementary technologies. The next question, naturally, is: will the private sector get us there, even with a carbon tax? The answer is- unlikely. The Silicon Valley model of venture capital is not well-suited to the energy sector, where innovation comes in small increments, the regulatory weather is heavy, and the scale in time and capital to money-making deployment is huge. There needs to be continued, and vastly expanded, government direction of the research, along with much other public policy, to address this crisis.


  • Fed still fighting the last war, or the one before that, or a class war. But good policy it is not.
  • IRS heading towards total impunity.
  • Justice is in peril.
  • What a year...

Saturday, July 8, 2017

Who Are the Real Wealth Creators?

Technologists, of course.

Of the various indignities of the campaign last year, the economic ignorance displayed and accepted was particularly galling. The Trump voters of the hinterlands, supposedly angry about their compromised economic position, elected a party and person whose avowed goal is to take more money from our public institutions, the poor, and the middle class, and give it to the rich. This after a near-decade of total intransigence by the same party against restarting an economy that was floored in the banking meltdown and has been limping since. It has taken a decade to get back to more or less normal conditions- time lost to economic growth in general and to countless individual traumas.

Who and what creates economic growth? Is it the "job creators"? Is it Goldman Sachs? Is it the 1%? That is a big question facing the nation, both politically and in straight economic policy. The new administration says yes, yes, yes, arguing that giving the rich hefty tax breaks, not to mention reducing regulations of all sorts in financial and environmental sectors, will help economic growth. Will it? Obviously we have been through all this before, under G. W. Bush and Reagan as well. And the answer is no, it does not increase economic growth. Money going to the rich is money that is, largely, invested in low-risk assets like bonds and real estate.

More generally, does the managerial class create wealth by their organizational prowess? Is Amazon better than Staples, which is better than Pat's Stationery store down the street? Organizational differences make only minor advances in overall wealth, and seem mostly to facilitate the redistribution of labor earnings to ever fewer and richer capitalists. As previously discussed, the power of capital is that it always wins, through good times and bad, in every negotiation, since versus labor, it is always taking less risk.

What Amazon has that Pat's establishment does not is, mostly, new technology. The internet came along and showed that everyone could be connected, instantly. How about using that connection to sell things on a nationwide scale, especially things that are easy to ship? Sears would have been the natural founder of this franchise, given its nationwide catalog roots, but it had become too invested in its stores to pay attention. Capitalists only deploy the technology that exists. They do very little to generate new technology- that is left to academics and the government. It is technology that keeps revolutionizing our lives and raising our standards of living- our collective wealth. And when it comes to distributing new technology, sometimes the market does a worse job than the government, such as with roads. We could have much better internet infrastructure if it were managed in the public interest as a utility.


Where would the "job creators" be without their cell phones? Where would they be without databases and spreadsheets? Where would they be without electricity? They would doubtless be riding herd over an estate of serfs. They would be just as wealth-creating in relative terms, but all in a much poorer society. The dark ages were dark not because entrepreneurs had lost their will to manage others, but because technological, scholarly, and governing instututional development ground to a halt with the dissipation of the Western Roman Empire. It took centuries of slow, accreting technological progress to make cities as large as they were in Roman times, and make societies as wealthy. By that point, the process took on a life of its own in the West as an ideology of Enlightenment and material and moral progress took hold, maintaining support for learning and innovation which reached unimaginable heights in the twentieth century.

Looking back, we can rue that the fuel of all this transformative progress and wealth creation has been buried reduced carbon, which, as our waste product CO2, is now befouling the biosphere. Our collective wealth has also begotten a vast and completely unsustainable increase in human population, whose many appetites are destroying much else of the biosphere. These are the problems of prosperity, and they should, if we are morally responsible, now be foremost in our public and private intentions and actions, as we transition to a sustainable as well as prosperous future.


  • Who needs clean water?
  • Who will sue on behalf of the public interest?
  • Free? We are not free. We are under the feudal thumb of corporations. "Likewise, the origin and success of the factory lay not in technological superiority, but in the substitution of the capitalist’s for the worker’s control of the work process and the quantity of output, in the change in the workman’s choice from one of how much to work and produce, based on his relative preferences for leisure and goods, to one of whether or not to work at all, which of course is hardly much of a choice."
  • Trump is the weakling.

Saturday, October 29, 2016

Better Than Nanites: Custom T-cells

Rather startling developments in the use of our internal maintenance cells to target cancer or other problems.

I am watching a very nice science fiction series, about a motley crew in space who try to be kick-ass and all, but deep down are just ... very nice people. Because they are Canadian, of course! Every episode seems to steal another plot from past classics, like The Bourne Identity, Star Trek: Deep Space Nine, and even one featuring zombies.

One crew member is an android, (named "Android"), but is touched with a bit of schizophrenia, a la Commander Data or Seven-of-Nine or Spock, about the virtues of humanity and being humanely idiosyncratic. She also features nanites- apparently tiny machines in her high-tech body that run around and repair things when she takes a hit for the ship.

Android to android: another android shoots the ship's Android. Repair will now commence.

Such nanites are quite a stretch, current technology having nothing remotely similar, and Android's body being rather inhospitable to anything running around among all the wires, metal, electricity, and whatnot. Such nanites would have to have some kind of master plan for guidance, which would be pretty difficult to fit into a nano package.

Yet our own bodies do have nanites, called the cells of the immune system. This system as a whole is an organ with no fixed location or shape; it travels around the body in the bloodstream, the lymph, and the spaces between cells- anywhere damage occurs. These cells have a highly complex communication system that finds damage, determines its type, cleans it out, attracts other helper cells as needed, reads the local developmental and tissue patterns to help local cells make the repair correctly, and gradually turns itself off when finished.

Among the central actors of this system are helper T-cells, which mediate between the damage signals- coming from normal tissue as well as from specialized cells that roam around looking for damage- and the inflammatory and damage-repair machinery, such as the cells that create antibodies (B-cells) or those that phagocytose and kill infected or damaged cells directly (CTLs, macrophages). Some T-cells activate immune responses, others dampen them, and they do this over the whole time course of the damage reaction. HIV is an infection mostly of T-cells, killing them and leading to the collapse of the whole immune system.

One of the magic properties of T-cells is specificity. Like the antibody system of B-cells, T-cells use genetic/genomic trickery to generate a galaxy of specific receptors- called, as a family, the T-cell receptor- which can recognize specific molecules, such as proteins from viruses and bacteria. Each T-cell generates and displays one such variant on its surface, so the right individual T-cell has to get to the right place to initiate its response, part of which is rapid growth and replication into an army of T-cell clones (do that, nanite!). There is also a process, carried out mostly in the thymus, that deletes newly born T-cells whose specificity is directed against the body's own proteins rather than against foreign entities.
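To make that generate-and-filter logic concrete, here is a minimal toy sketch in Python, under loose assumptions: receptors and antigens are reduced to short random strings, and "recognition" to exact matching- a caricature of the immunology, not a model of it. A diverse repertoire is generated blindly, self-reactive clones are deleted, and whatever happens to match a foreign antigen is expanded into an army of clones.

# Toy sketch of clonal selection. Real T-cell receptors arise by V(D)J
# recombination and recognize peptide-MHC complexes; none of that is modeled.
import random
import string

def random_receptor(length=3):
    """Generate a random 'receptor' as a short string over a small alphabet."""
    return ''.join(random.choices(string.ascii_uppercase[:6], k=length))

# 1. Generate a large, diverse repertoire of naive T-cells.
repertoire = [random_receptor() for _ in range(100_000)]

# 2. Negative selection in the "thymus": delete clones that recognize self.
self_antigens = {random_receptor() for _ in range(50)}
surviving = [r for r in repertoire if r not in self_antigens]

# 3. Infection: a foreign antigen appears; matching clones expand rapidly.
foreign_antigen = random_receptor()
matching_clones = [r for r in surviving if r == foreign_antigen]
expanded_army = matching_clones * 1000   # crude stand-in for clonal expansion

print(f"Repertoire after negative selection: {len(surviving)}")
print(f"Clones matching the foreign antigen: {len(matching_clones)}")
print(f"Size of expanded clonal army: {len(expanded_army)}")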

Given all this, it has been interesting to learn that the immune system often acts against cancers as well. While composed of the body's own DNA and cells, cancers can express various altered proteins due to their mutations and deranged regulation, and may also express stress molecules that tip off parts of the immune system that those cells should be killed. On the other hand, cancers can also, through natural selection, cleverly express other signal molecules that turn the immune system off, thus shielding themselves from destruction. That is a serious problem, obviously.

So, many researchers have been casting about for ways to get the immune system to overcome such barriers and attack cancers more robustly, especially in resistant cases. And after a lot of false starts, these approaches are starting to bear remarkable fruit. Some are drug-based, but more direct are methods that re-engineer those cells to do what we want.

Since they are travelling cells, T-cells can be taken out of the patient. This allows new genes to be introduced, mutations to be made, and so forth, especially using the new CRISPR technologies. One approach is to add a receptor specific to the patient's cancer, such that the re-engineered T-cells target it directly, become activated in the tumor environment, and start to resolve the tumor. This approach has been quite successful, to the point that some patients undergo tumor lysis syndrome- a somewhat dangerous consequence of the tumor being destroyed too quickly for the body to handle the resulting debris.

A recent paper elaborated this re-engineering approach to make it far broader. Researchers introduce not only a new receptor to direct the T-cells to particular targets, but a multi-gene system to perform any additional function desired in response to targeting, such as pumping out a toxin, or a regulator / activator of nearby cells. This promises to supercharge the T-cell therapy approach, beyond the native scope of action of normal T-cells, however well-targeted.

For example, in a demonstration experiment, mice were given tumors on two sides of their bodies, one of which carried an additional genetic marker- the fluorescent protein GFP, expressed on its surface. This is not a mammalian protein at all, but comes from a jellyfish, and it would have had no effect if the experimenters had not also engineered a batch of that mouse's T-cells to express a combination of new genes.

One was a version of the common receptor protein Notch, which had its cell-external receptor portion replaced by a receptor for GFP, and its cell-interior portion replaced with the transcription factor Gal4. When the exterior portion of a Notch protein is activated, the internal portion gets cleaved off and typically travels to the nucleus to do its thing- activate a set of responsive genes. The other engineered gene was a Gal4-responsive gene expressing a cancer-fighting drug called blinatumomab. This is an antibody specific to a B-cell antigen, which is appropriate since the introduced tumor is B-cell derived.

Demonstration of tumor targeting with engineered T-cells; description in the text.

The synthetic receptor is shown in green (synNotch), exposing a GFP receptor on the outside and a cleavable transcription regulator on the inside. Upon encountering the GFP-expressing tumor (green), it activates transcription of an antitumor drug (custom antibody) abbreviated BiTE, which attacks cells expressing the cell surface receptor CD19, which these tumors do. The green tumor regresses within two weeks, while the control tumor does not.

The demonstration shows that this engineered treatment can address practically any target that can be specifically distinguished from normal cells (indeed, one can imagine multiple engineered receptors being used in combination), and generate any gene product to treat it.
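As a purely conceptual sketch of that sense-and-respond logic (in Python, with the "cells" as plain dictionaries rather than anything biological, and the GFP / Gal4 / BiTE / CD19 names taken from the description above), the engineered circuit amounts to a conditional: recognize the marker, release the transcription factor, express the payload.

# Conceptual sketch of the synNotch circuit described above, not a biological
# model: an engineered T-cell that recognizes surface GFP releases Gal4,
# which drives expression of a BiTE payload that marks CD19-positive cells.

def engineered_t_cell_response(target_cell):
    """List the actions the engineered T-cell takes on meeting a target cell."""
    actions = []
    # synNotch step: surface GFP on the target activates the receptor,
    # which releases the intracellular Gal4 transcription factor.
    if "GFP" in target_cell["surface"]:
        actions.append("Gal4 released")
        # Gal4-responsive gene: express the BiTE antibody payload.
        actions.append("BiTE expressed")
        # The BiTE directs killing of CD19-bearing tumor cells.
        if "CD19" in target_cell["surface"]:
            actions.append("tumor cell killed")
    return actions

gfp_tumor     = {"surface": {"GFP", "CD19"}}   # the GFP-marked tumor
control_tumor = {"surface": {"CD19"}}          # the unmarked control tumor

print(engineered_t_cell_response(gfp_tumor))     # ['Gal4 released', 'BiTE expressed', 'tumor cell killed']
print(engineered_t_cell_response(control_tumor)) # [] - the control tumor is ignored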

It also shows the increasingly expensive direction of medical care. Not only is the expressed gene product one of those recently-developed, highly expensive cancer drugs, but the T-cell extraction, reprogramming, and re-introduction has to be done on a custom basis for each patient, which is likely to be even more expensive.


  • The NRA has a screw loose ... arm in arm with Wayne LaPierre!
  • Guess which constitutional amendment is the most important?
  • Smoking still at fault for 30% of cancer deaths ... after all this time.
  • We are in deep CO2.
  • Financial regulation works.
  • The disorder has a name.
  • And a bitter end is in sight.

Saturday, September 17, 2016

The Desktop is Dead

Stick computers like the Chromebit are part of our future- tiny, portable, cheap.

As someone who consults frequently on personal computer issues, I found it interesting to hear about a new form factor- the stick computer. The leading example is the Chromebit. Chromebooks are better known- petite laptops that give you a Google Chrome browser as a portal to the whole web, including a series of web apps and, soon, Android apps as well. The Chromecast product is also better known- a tiny computer that lets you channel WiFi streams into your TV, using a phone as a remote control.



The Chromebit is a bit of each, with the small size of the Chromecast, a bargain-basement price of $85, and the computing capabilities of a Chromebook. Like the Chromecast, it plugs into an HDMI port on any TV or monitor. But it turns that screen into a computer, given that web apps such as mail, docs, and storage now allow one to work entirely online, including cloud printing. Storage is negligible, so everything has to go to Google Drive or a similar online service. Likewise, connectivity is minimal, with one USB 2.0 port- enough for connecting a camera in a pinch, or a keyboard or mouse, though those should be Bluetooth. Naturally, you have to be online to do anything with this device.


Intel and various Android makers also offer stick computers. At $150, the Intel stick is a fully stocked Windows 10 computer, though with only 32 GB of storage. Android sticks do not offer full computer capability, being restricted to apps, like a tablet, but those capabilities obviously run quite a gamut, from Skype to web browsing, voice control, and millions of other programs.

For a person on a tight budget, these computers are an impressive way to get online with minimal expense, and one can use an existing TV to save even more. A full system would run something like...

$85  - computer
$15  - Bluetooth mouse
$30  - Bluetooth keyboard
$70  - cloud-compatible printer (optional)
$100 - monitor with HDMI (or use an existing TV)
=====
$300 total

This is impressive from a budget perspective, but it also indicates something about the future. One can imagine a world where our phones act as the computers behind everything we do, which we plug into dumb screens wherever we want, turning them into secure, full computers. Whether the applications also come to reside in the cloud, as Google is working towards, reducing reliance on any local computing power, is uncertain. That would require only slightly faster network connections than most of us have today, to drive fully animated clients almost entirely from distant sources. How much we can trust those corporate, centralized sources, in an always-connected ecosystem, to serve us faithfully is, naturally, another question.



Meanwhile, on a related tech issue: to power all these Bluetooth devices, rechargeable batteries are the sustainable and cost-effective solution. Charging such batteries can be tricky. It pays to use a smart charger that operates not just on a timer, as most chargers do, but by sensing the status of the battery, as sketched below.
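Here is a rough sketch of what "sensing the status of the battery" can mean in practice- a charge-termination loop of the sort used for NiMH cells. The read_voltage(), read_temperature(), and set_charging() functions are hypothetical hardware hooks, and the negative delta-V and temperature cutoffs are illustrative values, not any particular charger's firmware.

# Rough sketch of a "smart" charge-termination loop for NiMH cells.
# The sensor/control hooks are hypothetical; the cutoffs are illustrative.
import time

def charge(read_voltage, read_temperature, set_charging,
           max_minutes=180, delta_v_cutoff=0.005, max_temp_c=45.0):
    """Charge until the cell shows signs of being full; the timer is only a backstop."""
    set_charging(True)
    peak_voltage = read_voltage()
    start = time.time()
    while True:
        time.sleep(60)                                   # sample once a minute
        v, t = read_voltage(), read_temperature()
        peak_voltage = max(peak_voltage, v)
        if peak_voltage - v >= delta_v_cutoff:           # negative delta-V: cell is full
            break
        if t >= max_temp_c:                              # overheating: stop immediately
            break
        if time.time() - start > max_minutes * 60:       # safety timer as a last resort
            break
    set_charging(False)

Unlike a simple timer, this loop stops when the battery itself signals fullness (the small voltage dip and temperature rise that NiMH cells show at end of charge), which is what distinguishes a smart charger from a dumb one.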


  • Gerrymandering has brought us a crazy, unrepresentative House. There ought to be a law, right?
  • Both retrograde forces in the Arab world have more power (and arms) than the progressives.
  • Crime pays.
  • It's not easy being the only super-power.
  • Republicans fight in the gutter. Then they call others "crooked".
  • A lesson in psychological projection.
  • Bill Moyers and the Housewive's Rebellion.
  • People with a modicum of compassion, vs Trump.
  • What's going on in Puerto Rico?
  • Over a 1 million-mile lifetime on the roads, you have a 1 in 90 risk of dying in a crash. That is not good enough.
  • Annals of waste, pork, and fraud.
  • Health care markets still don't work very well.
  • Recidivism from Guantanamo.