
Archive for the ‘General’ Category

An upside for AI / LLMs

Sunday, February 11th, 2024

The new AI / LLM tools have many potential applications, but many of them will have downsides for some people. Replacing copywriters or some tech support staff is a benefit for the companies that apply the tools, but many of today’s jobs in those areas will be eliminated.

One application that seems inevitable and all positive is raising the floor for human performance. A tool you can ask for advice, or better yet an AI tool that monitors you (by email, by watching internet use across devices, and/or by video), understands what you are doing, understands the context, and provides advice will help people avoid mistakes. While these mistakes are not obvious to the person making them, they are obvious to someone with experience, or to someone able to research the problem. Imagine a person playing chess alone vs. someone playing with a chess program that flags potential mistakes; the general knowledge of an AI would let such a tool work in many more situations.

This AI tool can develop slowly and in a modular fashion; it will be useful even in crude form, but will become revolutionary once it gets good enough. Imagine a person using a crude form of this as an interactive chat tool. The person could say, “I’m taking a vacation to Greece” and get advice on things to do and what they need to know about currency or visas. A more advanced AI could remind a person of appointments, or tell them they need to change the house air filter. If a person was going to make a poor decision (routinely using check-cashing places, or buying a car model with a poor service record), the AI could warn them.

This AI tool would be able to slot in special modules as needed. A person starting a business could get localized advice on the steps to take. Someone buying a house could get advice on things to check, and a new homeowner could get advice on what to check and repair and reminders for maintenance.


Bruce Schneier’s “AI and Democracy” talk at Capricon 44

Sunday, February 4th, 2024

Bruce Schneier talked at Capricon about fifteen ideas he has on AI that are forming up into a 2024 book. One thing he mentioned is that AI will make lawsuits much cheaper to launch and carry out, and that multiplying the number of lawsuits would force courts to adopt AI adjudication to keep up. Bruce passed over this pretty quickly, but I think it will have an early and pronounced effect on society.

It looks like legal work is a problem AI will be able to solve soon. That is, AI tools will be able to contribute effectively to the process of filing and carrying out lawsuits. This is not one problem, but a set of related problems that AI will soon handle effectively. Given a set of facts and objectives, an AI will be able to determine what type of lawsuit to file, write it up in the proper jargon and format suitable for submission, determine and write responses to opposing counsel’s motions, summarize and prioritize discovery material, etc. A lot of legal work is routine, repetitive, and very similar to previous cases. Really, a perfect problem for AI.

The immediate upshot is that a lawyer using AI tools will be able to do much more legal work, and do it faster, so lawsuits will be much cheaper to launch. The short-term impact is that the number of lawsuits filed will go up multiple-fold, and this will crash the courts. Gum them up. Bring things to a standstill. US courts are operating at capacity already and can’t handle more cases.

There isn’t any way for courts to prevent this. The lawsuits will be filed by lawyers at established law firms. Lawyers will use AI as a tool, reviewing AI-written suggestions and briefs, and from the court’s perspective these lawsuits will look just like existing lawsuits; there will just be many more of them.

In the long term, it will make sense for judges and the courts to adopt AI tools to accelerate their end of things, but this will require new laws. New laws mean years of hearings, discussion, negotiation, etc. Government functions require deliberation and consideration before making big changes. And who will develop AI tools for courts? The market is smaller and more uncertain than the market for these tools at private law firms. And judges are very conservative, notoriously slow to act, to react, to adopt new technology.

So AI-assisted lawyering will hit the courts at some point in the next few years, but it will take a decade or more for the courts to effectively react.

Limit Firefox memory

Saturday, November 11th, 2023
  1. Open Firefox, go to about:config.
  2. Go to browser.tabs.unloadOnLowMemory, set it to true.
  3. Go to browser.low_commit_space_threshold_mb, set it to 2/3 to 3/4 of your computer’s total memory, in MB (e.g. 32 GB -> 24000).
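The step-3 value can be computed rather than eyeballed. A minimal sketch, assuming you want ~3/4 of total RAM (the free/awk line is one way to get the total on Linux; the 24000 above is just 32 GB rounded down):

```shell
# Compute ~3/4 of total RAM in MB, for browser.low_commit_space_threshold_mb.
total_mb=32768   # a 32 GB machine; or: total_mb=$(free -m | awk '/^Mem:/{print $2}')
echo $(( total_mb * 3 / 4 ))   # -> 24576
```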

Downloading a video from an ebay listing

Wednesday, March 8th, 2023

Using Firefox, go to the item page, open the Firefox Web Developer Tools (Menu -> More tools -> Web Developer Tools). Click on the Network tab in the Tools section, then on the web page click on the video and play it.

In the Network tab, requests appeared for audio_128kb-0.m4s through audio_128kb-16.m4s, and for video_720p-0.m4s through video_720p-16.m4s. I copied the URLs for the video and audio requests (all the same except for the -0 to -16 segment number), and used wget to download the files. Each was 1-2 MB:

wget https://video.ebaycdn.net/videos/v1/8f1e79501860a64d9e245434ffffec91/5/video_720p-0.m4s

After running wget for each segment, the entire video was present. I downloaded segments starting from 0; when I requested the segment after number 16, I got a ‘not found’ message, letting me know I had the last segment.

Then I concatenated the pieces together:

cat video_720p-0.m4s >> video_720p.m4s
cat video_720p-1.m4s >> video_720p.m4s
...
cat video_720p-16.m4s >> video_720p.m4s

And the same for the audio segments. I put the cat commands into a batch file “cat.txt” and ran them using “bash cat.txt”.

Then ffmpeg was used to combine the two streams and convert to mp4 format:

ffmpeg -i video_720p.m4s -i audio_128kb.m4s -c copy ebay_720p.mp4
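The wget and cat steps above can be scripted. A minimal sketch, assuming the same base URL and -0 to -16 segment range as this listing (both will differ for other videos; copy yours from the Network tab):

```shell
#!/bin/bash
# Base URL from the Network tab (everything before the -N.m4s suffix).
base="https://video.ebaycdn.net/videos/v1/8f1e79501860a64d9e245434ffffec91/5"

# Print the segment filenames prefix-0.m4s .. prefix-<last>.m4s, in order.
segments() {
  local prefix=$1 last=$2
  for i in $(seq 0 "$last"); do
    echo "${prefix}-${i}.m4s"
  done
}

# Uncomment to fetch, join, and mux the streams:
# for f in $(segments video_720p 16) $(segments audio_128kb 16); do
#   wget -q "$base/$f"
# done
# cat $(segments video_720p 16) > video_720p.m4s
# cat $(segments audio_128kb 16) > audio_128kb.m4s
# ffmpeg -i video_720p.m4s -i audio_128kb.m4s -c copy ebay_720p.mp4
```

Passing all the segment names to a single cat replaces the one-per-file >> lines above.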

The Solar roof

Sunday, December 18th, 2022

David Brin, in a comment on his blog, describes Elon Musk as a ‘successful innovator’ rather than an investor or a government-subsidy truffle pig. Brin seems to be under the impression that SolarCity “put up 2 million solar roofs”.

As best I can find, Tesla has only installed a few thousand ‘Solar Roofs’. Electrek reported in 2022 that Tesla was doing 23 installs / week, and was pausing installations. Tesla started mass market deployments of the product in 2020.

Tesla bought SolarCity in 2016. SolarCity does mainly ordinary solar panel installations, and Tesla uses the combined figures to make the ‘Solar Roof’ product seem more successful than it is. The Tesla ‘Solar Roof’ costs several times more per watt than ordinary solar panels, and doesn’t make economic sense.


What will it take for an AI to be a person

Sunday, September 4th, 2022

What qualities will make an AI a person?
-General intelligence, not just a special ability to solve a particular class of problems.
-General ability to learn from interacting with the environment.
-Ability to communicate with people.
-A sense of self; the AI needs to see itself as a person.
-General ability to reason abstractly, about problems in general.

The various types of machine learning that exist today can and likely will be a part of a human-level AI, but as a module or subcomponent that gets applied to learning tasks. Another level of AI will need to exist on top of that, applying general knowledge storage, modeling / conceptualizing problems, dealing with overarching direction and goals.

Multi-color 3D print head idea

Tuesday, September 28th, 2021

Saw this paper, “Voxelated soft matter via multimaterial multinozzle 3D printing”, pdf. Two or more fluids come together at a bend, and static pressure is enough to keep the currently printing liquid moving towards the outlet instead of backing up into the second material’s source tube. Likewise, the pressure of the current print liquid keeps the other fluids back.

There is effectively no mix chamber, so the change from one fluid to the other is quite quick, and there is little mixing after a switch.

This works because of the size and orientation of the fluid tubes in relation to the viscosity and other properties of the liquids. The authors make the print heads out of plastic and print with silicone and wax.

To use this for 3D printing plastic, the print head should be made out of a material with better heat resistance, such as metal or ceramic.

Idea
Make a print head like this out of ceramic (alumina, or similar ‘technical ceramic’). 1) 3D print the flow chamber and nozzle geometry out of a thermoplastic (or wax), then 2) slip cast ceramic around this. 3) When the ceramic is fired, the plastic will melt out or vaporize, leaving the desired nozzle geometry.

Idea 2
The geometry needed is simple, at least for two inputs. The thin join can be a very short segment, a few mm in length. The lead in tube can be drilled 2-3mm wide, then the 0.5 or 0.25 mm join tubes can be drilled out. Drill the outlet from the bottom, then drill the inlets from the bottom of the lead in holes. This would require precision to make the segments join up correctly, but the drill holes would be short.



Using cron to mute sound in Ubuntu 20.04

Wednesday, August 18th, 2021

I wanted to turn off audio at night automatically using cron.

I saw suggestions to use amixer:
export DISPLAY=:0 && /usr/bin/amixer -D pulse sset Master,0 0%
but this gave an error:

ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection refused
amixer: Mixer attach pulse error: Connection refused

This works, add this line to /etc/crontab:

* 23<tab>* * *<tab>jiml<tab>DISPLAY=:0.0 pactl --server unix:/run/user/1000/pulse/native set-sink-mute @DEFAULT_SINK@ true

and restart cron:
service cron restart

jiml is the user with the open desktop.
‘1000’ is the uid of user ‘jiml’; it can be found with:

ls /run/user
or
id -u jiml

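A matching line can unmute the sink in the morning. A sketch, assuming the same user and uid; the 7:00 hour is just an example:

```shell
# /etc/crontab: unmute at 7am (same <tab>-separated fields as the mute line)
* 7<tab>* * *<tab>jiml<tab>DISPLAY=:0.0 pactl --server unix:/run/user/1000/pulse/native set-sink-mute @DEFAULT_SINK@ false
```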

Carbon capture

Sunday, March 3rd, 2019

The basic problem with carbon capture is energy, and energy is cost. When coal or oil is burned, heat and CO2 are produced. CO2 is a pretty low energy form of carbon. Turning it into something solid (calcium carbonate, graphite or coal) requires a lot of energy. Also, when CO2 is made by burning fossil fuels it disperses, and re-concentrating it requires energy. That’s why carbon capture proposals often include using exhaust gas, grabbing the CO2 before it disperses. The other main type of capture I’ve seen proposed takes the CO2, concentrates it to high pressure, and pumps it underground (and hopes it stays there). Compressors take a lot of energy, and so do pumps if the CO2 needs to be piped hundreds of miles to a place where it can be pumped underground.

The key number for carbon capture is: how much energy is required relative to the amount generated by burning the fossil fuel? I’ve never seen articles about carbon capture touting this number. A quick look shows one assessment being 30% – 35% of the energy (Zhang et al, 2014); another figures the production cost of electricity with carbon capture to be 62% – 130% higher (White et al, 2012, Table 6). Another article looks at the harder case, CO2 capture from air, and estimates the cost at $1000/ton CO2 (link). Burning the coal that generates a ton of CO2 (about 1/3 of a ton of coal) produces about $80 of electricity.
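As a rough sanity check of the air-capture case, using only the two figures above:

```shell
# Ratio of direct-air-capture cost to the electricity value of the coal
# that emitted the CO2 (both numbers are the estimates cited above).
dac_cost_per_ton=1000      # $/ton CO2 captured from air
electricity_per_ton=80     # $ of electricity per ton of CO2 emitted
echo $(( dac_cost_per_ton / electricity_per_ton ))   # -> 12 (integer ratio)
```

So capturing from air would cost roughly 12x the value of the electricity the coal produced in the first place.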

So the best case cost of carbon capture–from power plant exhaust gas–is dismal, 25%, 75%, maybe over 100% of the value of the electricity. This number will translate directly to increased fossil fuel energy costs (+30%, +100%, etc.) if fossil fuel companies are required to capture the majority of the CO2 pollution they generate.

All the carbon capture projects are basically stalling actions. The fossil fuel companies pay small $$ to put together a pilot plant (or better yet, get the government to fund it), run tests for years, but never implement CO2 capture on a coal or gas energy plant. This has been a very successful approach for the fossil fuel industry; they’ve managed to stall things for 50 years already!

CRISPR

Tuesday, February 26th, 2019

The CRISPR gene editing system is a major technical advance. It does open up the near term possibility of making a few small changes to a human embryo’s DNA, but I don’t find that particularly interesting or alarming.

What makes CRISPR better than previous tech for gene modification is that it works at high efficiency (1% to 60%) with very high specificity. I read a recent paper testing CRISPR on human embryos that reported 50% effectiveness. Given a handful of embryos to work with, there is a very good chance of making a single change in one embryo.

We have very little knowledge or technology for making positive changes to animals, which is a huge limitation on genetic ‘engineering’. Mostly what is understood are disease-causing (or predisposing) genetic variants. So a single change (maybe, in a few years, a handful of changes?) can be made to a human embryo. There are other limits to modifying human embryos apart from lack of knowledge. The more time an embryo or human embryonic stem cell is cultured, and the more it is manipulated, the greater the chance of something going wrong and the child being born with problems. This tech is great for manipulating animals in the lab. If many or most of them have the genetic change, great! If some are born with defects, cull them, or breed another generation and use those in experiments (often the first generation has non-genetic defects that breed away). But these are huge problems if you are working on humans, because things that increase the risk of a damaged child are not desirable.

Long term (100-1000 years), when increases in understanding of biology make improvements (or significant changes of any sort) in humans possible, I think what we’ll see is that the people with the least concern for child welfare will be the most willing to experiment on them.

The really exciting possibilities CRISPR opens up are in genetic treatment of human disease in the tissues of kids and adults. There is delivery tech (well-tested viral vectors, and a host of other methods) that can get CRISPR into a good percentage of cells (10% to 50+%) in many tissues, and once there, CRISPR will edit a good fraction of those cells. For many diseases, fixing a genetic defect in 1%, 10% or 20% of cells is enough to treat the disease, so genetic treatment of a host of diseases is now possible. Things like hemophilia, some muscular dystrophies, maybe Huntington’s Disease, metabolic diseases, Parkinson’s disease, and on and on. There will be a lot of exciting advances turning that ‘possible’ into actual treatments over the next decade or two.

The other major effect of CRISPR tech is that it makes animal experimentation faster and cheaper, and will accelerate basic biological research. We still don’t know what the majority of individual genes do, let alone how they work in complexes and networks in cells.