Ben Goldacre just posted audio of the Windows MSPaint executable interpreted as a wave file.


[Audio embed: "Windows 7 x64 MS Paint EXE Interpreted as PCM Data" by R2Bl3nd]

Near the end, the amount of repetition is crazy. I was wondering why they don’t compress these files, but on a closer listen, the sound modulates kind of like through a wah pedal… so the numbers are indeed different even though the rhythmic patterns are similar.

This makes me wonder about a whole new compression scheme based on the “rhythm” of numbers in code, instead of the literal byte patterns that schemes like ZIP and RLE match. Does anyone know of a compression scheme that could handle that?
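For what it’s worth, delta encoding gets at part of the idea: store the difference between neighboring values instead of the values themselves, and a slowly modulating sequence collapses into runs of tiny, repetitive numbers that ZIP can chew on. A toy sketch (purely illustrative, nothing to do with how EXEs are actually packed):

// Toy delta encoder: slowly drifting values become runs of identical deltas
function deltaEncode(bytes) {
  var deltas = [bytes[0]]; // keep the first value as-is
  for (var i = 1; i < bytes.length; i++) {
    deltas.push(bytes[i] - bytes[i - 1]);
  }
  return deltas;
}

// [10, 12, 14, 16, 18] -> [10, 2, 2, 2, 2]
console.log(deltaEncode([10, 12, 14, 16, 18]));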

We recently had a music video project with four screens playing simultaneously, where each screen was the domain of a different editor. To coordinate everyone so I could easily combine their separate scenes into a 4-screen layout (make sure everyone was cutting on the same beat, divvy up the scenes, etc.), I decided to share a single FCP project file with the whole group.

All the editors were working on MacBook Pros, so we needed a low-res, quick-and-dirty editing codec that would render as fast as possible while we made our edits. I decided to use the OfflineRT codec as a proxy: we would edit using the fast, low-res clips, then do the final render with the hi-res original versions. Apple recommends their new ProRes codec for editing, but it took up so much disk space (more than the original clips!) that it was better to hand everyone the tiny OfflineRT files instead.

Since we filmed the video with a motley camera setup spanning different frame rates, resolutions and codecs (one Canon 5D, one Nikon D90, and several tiny Kodak PlaySports), it was quite a pain to import everything into Final Cut. Final Cut is not designed for the consumer cameras we had to work with, so you have to hack around it in several places to get it to work.

First of all, the Log and Transfer function that is supposed to make this so easy never recognized any of our cameras’ files. And if you rename them, you will confuse FCP to the point of futility. Here are the steps I took to make a decent proxy editing setup:

  1. The 60fps footage that the Kodak PlaySport records is unrecognizable to Final Cut. Use MPEG Streamclip to convert the 60fps clips to the ProRes 422 codec (720p @ 59.94fps).
  2. Use MPEG Streamclip to convert the rest of the footage (24fps, 30fps, etc.) to the OfflineRT HD codec (Apple Photo JPEG [384x216]):
      
    [screenshot: MPEG Streamclip export settings]

    Yes, you can also transcode the footage to OfflineRT using the Media Manager in FCP itself; however, it will take a LOT longer (FCP: estimated 36 hours. MPEG Streamclip: 45 minutes).

  3. Make sure these files are in a separate folder called “OfflineRT”. The original clips + converted 60fps clips will be in a folder called “Originals”. Make sure the two have a matching directory structure and identical file names (see the example layout after this list).
  4. Make a new project in FCP and import the original files from the “Originals” folder into it.
  5. Place any beat markers, cut markers, etc on the audio file so everyone is cutting on the same beat.
  6. Create a new sequence with the Apple ProRes 422 1080p 30p settings and add the audio file to it.
  7. Save. This is your original project file. Make a backup if you want.
  8. Now you will create an “offline” version of the project where all the original, hi-res media files are disconnected. You will then reconnect them to the low-res files for faster proxy editing:
  9. Highlight all the files and sequences in your project. Right click and choose “Media manager”.
  10. Choose “Create Offline”.
  11. Set sequences to “OfflineRT HD” – same as the clips you converted earlier in step 2.
    [screenshot: Media Manager “Create Offline” settings]
  12. It will ask you to save your new project file with the low-res footage. It will resize everything for you (if it feels like it; prepare to fiddle with the scale during the upconversion later).
  13. Now you can send the project file to someone with the low-res OfflineRT clips, and they can reconnect them on their own drive to make quick edits:
  14. If you just opened the file, FCP will tell you the media files are missing. Forget the render files, but find the clips. Or cancel that window, highlight all your clips and sequences, right-click and choose “Reconnect Media”.
  15. Here is where the trick comes in: you will point FCP to your low-res OfflineRT clips so you can proxy edit with near-instantaneous render times:
  16. Choose a new “Search single location” folder – point it to the “OfflineRT” folder.
  17. Click “Search…”
  18. It will hopefully find the files, since they are named the same. Make sure “Reconnect all files in relative path” is checked and click “Choose”.
  19. It may warn you about in/out points. Hopefully this won’t matter but be wary anyway for Murphy’s sake. Click “Continue”.
  20. You may have to repeat steps 16-19 until all the files are found; for some reason it doesn’t always find them all the first time.
  21. Edit away!
  22. Save the project file, share it, etc.
  23. Now you can email the .FCP file back to your master rendering machine. 
  24. Repeat steps 8-12 to create a new offline project, but this time setting sequences to the Apple ProRes 422 codec at 1080p to match your originals.
  25. Repeat steps 14-20 to reconnect the media clips, but this time to the hi-res “Originals” folder.
  26. Now you can render your project from the originals at full quality! Why isn’t this easier?
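For reference, here is the kind of mirrored layout step 3 is after (the clip names here are just examples):

Originals/
  clip001.mov
  clip002.mov
  playsport_60fps_001.mov
OfflineRT/
  clip001.mov
  clip002.mov
  playsport_60fps_001.mov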

I have been trying to get an Android phone to work with LSD, the VJ application I wrote in HTML5 (see my earlier LSD post), so I can hook it up to my pocket projector and have a handheld VJ device for street and guerrilla performances!

However, despite the Android 2 feature list boasting HTML5 video support, the implementation has a lot of holes. I found this great post from Peter Gasston on making your video work on Android phones, which is a big help. I’ve implemented his recommended workarounds, like removing the “type” attribute on the source tag and manually playing the video with JavaScript. If it works, this link should play a test video that is known to work on Android. Then we will see if it actually loads into the canvas, or if Android only supports playing one video at a time, fullscreen in the media player. I guess the iPhone is ahead in this regard, though I have yet to see it play all the videos at once either.
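For the record, the workaround boils down to something like this (a rough sketch; “testvideo.mp4” just stands in for one of LSD’s clips):

// Build the video element in JavaScript, per Gasston's workarounds
var video = document.createElement('video');
var source = document.createElement('source');
source.src = 'testvideo.mp4'; // note: no "type" attribute on the source
video.appendChild(source);
document.body.appendChild(video);
video.play(); // Android ignores autoplay, so start playback manually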

iPhone: 1 / Android: 8,543,341,231,324,212,378

I just spent a weekend coding a little proof-of-concept VJ application to test the live video mixing capabilities of HTML5, called LSD (Layer Synthesis Device). You can use LSD to VJ video clips on the web! Choose video clips and images and blend them together using the mixer controls or the interactive mouse mode. Create your customized hallucination directly in your browser and share with your friends!

[screenshot of LSD]

I wrote LSD to try out the new video and 2D rendering capabilities of HTML5 and the Canvas element (and also to prove to myself that HTML5+jQuery is SOO much easier to work in than a language like Max/MSP or Processing). It worked surprisingly well. Use it in a fast browser like Safari or Chrome and you will see UI responsiveness and smoothness comparable to professional VJ software (eh, at low resolutions). Of course, browsers still have a long way to go to match the speed of a native application, but it is a promising start. (You can even VJ on your phone! Come on, this must be the future already!)
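The mixing core boils down to something like this: every frame, each layer gets painted into one canvas with its own opacity as the crossfade control (a sketch of the idea, not the actual LSD source; the element IDs are made up):

var canvas = document.getElementById('output');
var ctx = canvas.getContext('2d');
var layers = [
  { video: document.getElementById('clipA'), opacity: 1.0 },
  { video: document.getElementById('clipB'), opacity: 0.5 }
];

function render() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  layers.forEach(function (layer) {
    ctx.globalAlpha = layer.opacity; // the mixer's crossfade knob
    ctx.drawImage(layer.video, 0, 0, canvas.width, canvas.height);
  });
  setTimeout(render, 40); // ~25fps
}
render();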

Try LSD Now!

Supported on Firefox 3.5+, Safari 4+, Chrome, iPhone, Android (no IE, what a surprise…)

All the VJ clips and images are from my personal collection, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. The code is licensed as open source under the GPL, so feel free to play with the code and share it!

Just saw Ink last night. It seems like one of those movies you either love or hate; for me, it is my new favorite movie! You will never see a movie like this come out of Hollywood – it throws their rules away and builds its own unique visual language, a majestic poem amongst the regurgitated prose of today’s cookie-cutter movies. It thrusts you into its tale in a beautifully multi-layered, metaphorical way, and the photography and low-budget special effects are dazzling (even if Winans went a little overboard on the post-processing filters – he thinks like a VJ, using colors and filters to alter the mood of a scene or signify a change in worlds – and it works really well!)

It is the first movie I’ve seen where the fabula and the sujet actually follow the same linear progression (mostly), even though it doesn’t seem that way for most of the movie. Winans fools you into thinking it is a Nolan-esque sujet with the ending at the beginning, but then the ending totally twists it around into a Zen-like cyclical plot which leads to the most powerfully moving character development I’ve ever seen.

The acting is slightly flimsy at the beginning but strengthens quickly. Keep watching! It will all make sense. The characters, the costumes, the cute low-budget props and the fantastic ethereal world Winans creates have all the markings of a cult classic. This is the first movie I’ve actually wanted to physically own in a long time.

Watch Ink on Netflix!

I just made this rather Care Bear-colored painting straight in my browser, no Flash required, with an HTML5 app called Harmony! It was nice and fast in Firefox, too. Mr. Doob does it again with his pure HTML5 + JavaScript genius!

Seeing this makes me want to bring back my master’s thesis, Doodler. It was a social network based on drawing games, kind of a mash-up between Harmony and Facebook. I originally programmed it all in Flash, but it never really progressed past the beta (mainly because of my pure hatred of Flash, and secondly because I graduated and it was TIME FOR SUMMER! WOOO!). An HTML5 implementation though…. hmmmm…

[image: the painting I made in Harmony]

At the Big South Lab, where I work, Andy Zingerle and I have made a VJacket: a wearable controller for live video performance. Built into this old bomber jacket are all kinds of sensors to control visuals on the screen: hit sensors, light sensors, bend sensors and touch sliders. This way, the VJ is freed from the boring, cumbersome interface of mouse and keyboard and can instead use the very clothes on his body to control the videos and effects, with a precise dance converting convulsing limbs into luscious light shows. We are transforming this bomber jacket, a symbol of war and destruction, into a tool of creative expression and a symbol of peace. We are also going to release all the related hardware and software as open source in order to spread this transformation across the globe.

The VJacket uses a standard Arduino microcontroller board to relay the sensor data to the computer. To take it from there, we built the Arduino2OSC bridge: an easily configurable graphical interface that creates customizable OpenSoundControl messages from the sensor data. It also lets you adjust the analog input from the Arduino to your exact needs – scaling input and output values, adding cutoff thresholds, etc. – with enough options to (hopefully) cover all your Arduino input requirements. No matter if your sensor is a continuous slider or a one-hit piezo contact mic, and no matter if you are manipulating a video effect or triggering audio samples, we tried to make it flexible enough that you’re not stuck reprogramming a new patch for every project – just make a new preset and you’re done!
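The massaging itself is simple at heart. Here is the basic idea in JavaScript (just an illustration of the math, not the actual Max/MSP patch; the numbers are made up):

// Scale a raw 10-bit Arduino reading into a 0..1 range for OSC, with a
// cutoff threshold so sensor noise doesn't fire events by itself.
function mapSensor(raw, inMin, inMax, threshold) {
  var scaled = (raw - inMin) / (inMax - inMin); // rescale to 0..1
  scaled = Math.max(0, Math.min(1, scaled));    // clamp out-of-range values
  return scaled < threshold ? 0 : scaled;       // drop readings below cutoff
}

// e.g. a piezo hit sensor: only report hits above 20% of full scale
console.log(mapSensor(412, 0, 1023, 0.2)); // -> ~0.40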

For the above video demo, we used the VJacket through Arduino2OSC to send OpenSoundControl messages to Resolume Avenue, a popular VJ program. The Arduino2OSC bridge interface is generic enough to send any type of OSC message to any program that accepts them, including other video or audio programs like Arkaos Grand VJ, Max/MSP/Jitter, Kyma, etc. You can even send the messages over the LAN for networked performances!

We will soon make available the circuit designs, Arduino code, and Arduino2OSC Max/MSP patch/application – all under an open source license – so stay tuned to make your own VJacket!

Photojojo is inciting a debate on whether they should buy a Canon or Nikon DSLR for HD video. Now, I can’t give an unbiased answer, since I’ve only had experience with one brand, but as an avid Nikon user, I would recommend a Canon. I’ve never tried HD video on one, but I bought the Nikon D90 when it first came out, and although it can make some awesome videos, there are several SERIOUS shortcomings:

  1. No autofocus while recording. I’m not sure if the Canon can quite pull this off either from reviews I’ve read, but at least it TRIES! Nikon doesn’t do anything!
  2. The scanline (rolling shutter) problem: if you make any fast pans, the image pans in chunks down the screen because the sensor can’t read out fast enough to capture the whole shot in one frame. An interesting effect sometimes, but usually it just ruins your videos.
  3. No manual exposure control (besides a slider that lets you “suggest” about 1 stop of under/overexposure to the camera). If you have a manual aperture ring you may be able to control it a little, but the camera automatically adjusts ISO to keep the exposure in range, and there’s (almost) no way to disable it.
  4. Consequently, if you’re shooting in anything besides daylight, the ISO climbs and the graininess can get really bad.
  5. Video aside, Canon also uses a more-or-less standard RAW format. Nikon actually ENCRYPTS their RAW files which makes it a PAIN to get into Photoshop, Lightroom, etc. Adobe is usually a little slow on updating their Camera RAW drivers to decrypt Nikon’s files, especially if you have an older version of PS or LR. Who ever heard of DRM on YOUR OWN PHOTOS!?!? (insert information-wants-to-be-free rant here)
  6. Another thing with DRM: Canon’s shutter-release cord is a standard 1/8 inch headphone jack, which makes it extremely easy to hack and make your own intervalometer (something I wanted to do with my Nikon), but Nikon’s is a crazy proprietary plug! Doh!

Note: I’m not sure whether Nikon has fixed these faults in the HD DSLRs they’ve released since the D90, or whether Canon shares some of them, so just try them both out to see if they’ve gotten better!

I just installed TextWrangler (the free version of BBEdit), and am SO GLAD to kiss Aquamacs goodbye.

TextWrangler is so great, it even has a built-in diff comparison and merge tool. The REALLY cool thing is that you can use it as an external diff program with Git!

Here’s how (thanks to Jotlab’s post on How to use FileMerge with Git as a Diff Tool on OSX for giving me the idea):

~/git-twdiff.sh:

#!/bin/sh
# Git invokes the external diff with seven arguments:
# path old-file old-hex old-mode new-file new-hex new-mode
/usr/bin/twdiff --wait "$2" "$5"

Then just tell Git to use your script as its external diff tool:

git config --global diff.external ~/git-twdiff.sh
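
Make the script executable first (chmod +x ~/git-twdiff.sh), and from then on any diff opens in TextWrangler’s compare window, e.g.:

git diff HEAD~1 -- [some file]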

At my office, the ISP blocks all SMTP servers except their own… problem is, for some reason my account on theirs can’t send email! So I end up having to use webmail (ick), or hack my way around it. It’s very handy to have an SSH server or two lying around: if you do, you can just create an SSH tunnel to your own SMTP server and send your email through that!

I wrote a quick little script to run when you find yourself in this situation: just make sure to run it with admin privileges. Your email client won’t even know the difference!

sudo ./smtp_tunnel.sh

smtp_tunnel.sh:

#!/bin/tcsh

# Back up the hosts file, then point the SMTP hostname at localhost
# so the mail client connects to our tunnel instead of the blocked route
cp /etc/hosts /etc/hostsBackupSMTP
echo "127.0.0.1 [your.smtp.server.com]" >> /etc/hosts

# Forward local port 25 to the SMTP server by way of the SSH server
# (binding port 25 and editing /etc/hosts both need root, hence the sudo)
ssh [username]@[your.ssh.server.com] -L 25:[your.smtp.server.com]:25

# Restore the hosts file once the tunnel is closed
cp /etc/hostsBackupSMTP /etc/hosts