Category: Software

Downloading a video from an ebay listing

Using Firefox, go to the item page and open the Web Developer Tools (Menu -> More tools -> Web Developer Tools). Click the Network tab in the tools panel, then click the video on the web page and play it.

In the Network tab, requests appeared for audio_128kb-0.m4s through audio_128kb-16.m4s and for video_720p-0.m4s through video_720p-16.m4s. I copied the URLs for the video and audio requests (all identical except for the -0 to -16 segment number) and used wget to download the files. Each was 1-2 MB:

wget https://video.ebaycdn.net/videos/v1/8f1e79501860a64d9e245434ffffec91/5/video_720p-0.m4s

After running wget for each segment, the entire video was present. I downloaded segments starting from 0; when I asked for the segment after number 16, I got a ‘not found’ response, letting me know I already had the last segment.
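
Rather than running each wget by hand, the downloads can be scripted. A minimal sketch, assuming the URL pattern shown above and that both streams have segments 0 through 16:

# BASE is the example URL from the request above, minus the segment file name
BASE=https://video.ebaycdn.net/videos/v1/8f1e79501860a64d9e245434ffffec91/5
for i in $(seq 0 16); do
    wget "$BASE/video_720p-$i.m4s"
    wget "$BASE/audio_128kb-$i.m4s"
done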

Then I concatenated the pieces together:

cat video_720p-0.m4s >> video_720p.m4s
cat video_720p-1.m4s >> video_720p.m4s
...
cat video_720p-16.m4s >> video_720p.m4s

And the same for the audio segments. I put the cat commands into a batch file “cat.txt” and ran them using “bash cat.txt”.
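
A short loop does the same job as the file of cat commands; a minimal sketch, again assuming segments 0 through 16:

# append the video and audio segments in order
for i in $(seq 0 16); do
    cat video_720p-$i.m4s >> video_720p.m4s
    cat audio_128kb-$i.m4s >> audio_128kb.m4s
done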
Then ffmpeg was used to combine them and convert to mp4 format:

ffmpeg -i video_720p.m4s -i audio_128kb.m4s -c copy ebay_720p.mp4

Using cron to mute sound in Ubuntu 20.04

I wanted to turn off audio at night automatically using cron.

I saw suggestions to use amixer:
export DISPLAY=:0 && /usr/bin/amixer -D pulse sset Master,0 0%
but this gave an error:

ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection refused
amixer: Mixer attach pulse error: Connection refused

This works: add this line to /etc/crontab:

* 23<tab>* * *<tab>jiml<tab>DISPLAY=:0.0 pactl --server unix:/run/user/1000/pulse/native set-sink-mute @DEFAULT_SINK@ true

and restart cron:
service cron restart

jiml is the user with the open desktop.
‘1000’ is the uid of user ‘jiml’; it can be found by:

ls /run/user
or
id -u jiml
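
Putting this together, here is a sketch of a variant of the crontab entry that looks up the uid at run time instead of hard-coding 1000, plus a companion line to unmute in the morning. The minute-0 times and the 07:00 unmute hour are just example choices, not part of the original setup:

# in /etc/crontab: mute at 23:00 and unmute at 07:00, running as user jiml
0 23<tab>* * *<tab>jiml<tab>DISPLAY=:0.0 pactl --server unix:/run/user/$(id -u)/pulse/native set-sink-mute @DEFAULT_SINK@ true
0 7<tab>* * *<tab>jiml<tab>DISPLAY=:0.0 pactl --server unix:/run/user/$(id -u)/pulse/native set-sink-mute @DEFAULT_SINK@ false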


STAR-Fusion

STAR-Fusion is a program that detects RNA fusion events in RNA-Seq data. According to the paper describing the program, STAR-Fusion is much better than the dozen or so other callers under active development.

Still, reading the paper left me with some questions. As described by the authors, STAR-Fusion is not just a good caller, but the best caller by a wide margin. See Fig 3A. The next nine best callers have AUC values of 0.5 down to 0.3, but STAR-Fusion has a value of 0.8 in the authors’ testing.

And what is the source of this incredible result? The authors are silent on the subject. Perhaps they don’t know, or perhaps they didn’t notice how remarkable their achievement is and so didn’t remark on it. The description of the STAR-Fusion algorithm seems very similar to the algorithms used by every other RNA fusion caller. Some do better than others, so the details of implementation must matter.

So what is the critical advance STAR-Fusion makes? Is better sequence alignment the key? Is it the filtering approach? The paralog handling seems like it cuts down on false positives; is that the key? Discovering the critical factors for RNA fusion calling would be an important result.

Or are the performance results in the paper dependent on the synthetic test data set the authors use? Will subsequent papers comparing STAR-Fusion to other methods find that it is only average, or sub-par?

Saving streaming audio on Linux

MONITOR=$(pactl list | egrep -A2 '^(\*\*\* )?Source #' | grep 'Name: .*\.monitor$' | awk '{print $NF}' | tail -n1)

goes to alsa_output.pci-0000_00_1b.0.analog-stereo.monitor on my system

LAMEOPTIONS=' -s 44.1 --preset cbr 192'
FILENAME=foo.mp3

Record currently playing audio:
parec -d $MONITOR | lame $LAMEOPTIONS -r - $FILENAME
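
To stop the capture automatically instead of killing parec by hand, the pipeline can be wrapped in a time limit. A sketch, assuming GNU coreutils timeout is available; 3600 seconds records one hour:

# stop parec after an hour; lame finishes writing when the pipe closes
timeout 3600 parec -d $MONITOR | lame $LAMEOPTIONS -r - $FILENAME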

Split into separate 1 hr files with a 3 sec overlap:
ffmpeg -ss 00:00:00 -t 01:00:03 -i foo.mp3 -acodec copy $1.01.mp3
ffmpeg -ss 01:00:00 -t 01:00:03 -i foo.mp3 -acodec copy $1.02.mp3
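
The splitting can be looped for longer recordings. A sketch, assuming the recording is no more than five hours long and using 'part' as a placeholder output prefix:

# one file per hour, each with a 3 second overlap into the next hour
for i in 0 1 2 3 4; do
    ffmpeg -ss 0$i:00:00 -t 01:00:03 -i foo.mp3 -acodec copy part.0$(($i + 1)).mp3
done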

Commands from here

App game idea

Flip it

The game board is an array of tiles. The tiles have letters. The game play involves flipping a pair of letters, as if the two tiles could rotate through the screen on the axis that connects them. In effect, the move swaps them. The goal is to rearrange the tiles to spell words.
Move:

cat --flip c:a--> act
dog ------------> dog

cat --flip c:d--> dat
dog ------------> cog

The game can be played on different sized boards, and on boards with cutouts.
Variation 1: Give the tiles both a color and a letter, to tell apart copies of common letters.
Variation 2: Make the tiles two sided, so that flipping them exposes the other side.

What is interesting about this is that it is a kind of game that is easy to implement on a computer but hard or impossible to build as a physical game. There is a whole class of variations on pencil-and-paper or board games that haven’t been tried because of this!

Postscript site

Here’s a site with a good Postscript library for drawing variable width lines.

To add a rounded end I added an arc command after the bolt function:
20 setlinewidth
newpath 0 0 moveto
0 -50 200 -150 200 0 rcurveto   % relative Bezier curve from the origin to (200, 0)
currentpoint                    % save the endpoint coordinates for the arc below
reversepath bolt                % variable-width line routine from the library
5 0 179 arc                     % radius-5 arc centered on the saved endpoint: the rounded end
fill

I used two of these curves to draw a stylized pear.

L-system Iterator

I’ve put up a web site for exploring L-system images, L-system Iterator.

Well known L-systems

The snowflake shape is only one example of the pictures that can be drawn this way.

L-systems are simple iterated drawing rules. Simple rules for turning and drawing put together in this way make quite interesting and complicated patterns. The ones shown above are well known. From the left: the Koch snowflake, the Sierpinski triangle, a kolam-like image, and a plant-like image. On the second row: the Heighway dragon, the Hilbert curve, and another plant.

The Heighway dragon has many interesting properties; for example, it can tile the plane.

Some iterated objects are fractals: the Koch snowflake, Sierpinski triangle, and Hilbert curve are famous simple fractals.

L-systems can be quite complicated. The systems modeled on my web site use a single rotation angle and only one line width. More complicated models can make surprisingly realistic plants. Prusinkiewicz and Lindenmayer (the L in L-system) have developed detailed plant models.

The web site is based on the Perl code I wrote for my Biomorph evolution/selection web site. The images are generated using Postscript to draw the L-system, and then the ImageMagick convert program to change it to a PNG image. Images are given a file name that describes the L-system, effectively caching the image. The Prototype JavaScript library is used to assist in making the popup boxes.

Each L-system variant has two changes from the current L-system. Some logic is used to keep the L-system in the same family: if there’s no Y equation, one isn’t added. Existing equations are grown or shrunk but not dropped. These images can take much longer to generate than the Biomorph images, so a number of limits are placed to keep the L-system from getting too complicated or taking too much CPU time.

The hardest part of the Postscript was getting the images scaled and centered appropriately. The images can extend in any direction, and some are large, others small. The centering code generates the image twice: once to find its dimensions, and then a second time scaled and centered. Here’s the code that records the dimensions of each part of the image. It gets called before each stroke operation.

/max_path {
  % bounding box of the current path, in default user space
  gsave initmatrix pathbbox grestore
  % stack holds llx lly urx ury; update each stored extreme,
  % where a value of 'false' means it has not been set yet
  ur_y false eq { /ur_y exch store } { dup ur_y gt { /ur_y exch store } { pop } ifelse } ifelse
  ur_x false eq { /ur_x exch store } { dup ur_x gt { /ur_x exch store } { pop } ifelse } ifelse
  ll_y false eq { /ll_y exch store } { dup ll_y lt { /ll_y exch store } { pop } ifelse } ifelse
  ll_x false eq { /ll_x exch store } { dup ll_x lt { /ll_x exch store } { pop } ifelse } ifelse
} def

The ‘initmatrix’ command is required to reset the transformation matrix because of all the rotation operations.

The code for the site is linked on the L-system iterator home page.

Update: Added color variation as an option. And a reverse direction primitive.

Also, the code now runs under mod_perl.

Note for mod_perl users: mod_perl 2.0 has no way of handling alarms. select() doesn’t work either as a way of timing out pipes. The only usable method is prepending commands with ‘ulimit -t secs’ and letting the shell limit the spawned process.

To make the split color B&W images I used these ImageMagick commands:
convert -size 150x150 tile:color.png tile:bw.png ../temp/mask.png -composite split.png
using a half black, half white split image as the mask.
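
The mask itself can also be made with ImageMagick. A sketch, assuming the black half should be the lower-left triangle below the same 0,0 to 150,150 diagonal used for the split line; swap black and white if the halves come out reversed:

convert -size 150x150 xc:white -fill black -draw "polygon 0,0 150,150 0,150" ../temp/mask.png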

Then added the split line using:
convert -size 150x150 -fill white -stroke black -draw "line 0,0 150,150" split.png split_line.png

Putting video clips on my blog

Putting video clips on my blog was a bit harder than I expected. I first tried converting the .MOV files the camera writes into .avi files using ffmpegX and posting them using <embed> tag code. The files showed up great on my Mac but not at all on a Windows computer.

Flash format files, .flv, are the easiest cross-platform way to post video clips. Flash does require that the site supply a Flash player. There are many Flash players available. I tried OS FLV and it worked nicely.

To edit video files I used avidemux, then ffmpegX to convert them to .flv, and I put them on the site using the ‘noscript’ <embed> code suggested by OS FLV with the OS FLV player.swf.