Start A Development WebServer – One Liner

Run a web server at http://localhost:8000 serving the current directory (and its subdirectories):

$ python -m SimpleHTTPServer
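
With Python 3 the module was renamed, so the equivalent one-liner is:

$ python3 -m http.server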
 

Emulate A Mobile Device In Desktop Chrome

Another tip for debugging HTML5 mobile apps on your desktop: install the Ripple Chrome plugin.

 

Allow Chrome To Do Cross-Domain HTTP Requests

Very useful for debugging web apps:

$ "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --disable-web-security
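
Recent Chrome versions may ignore --disable-web-security unless they are also started with a separate, throwaway profile directory (the path below is only an example):

$ "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --disable-web-security --user-data-dir=/tmp/chrome-dev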
 

Game Design Lesson – Super Mario Bros (NES)

I read this very interesting snippet on the Wikia Super Mario Bros page:

“Interestingly, in the very first portion of World 1-1, the developers designed it so that a newcomer almost always gets a Mushroom. In the first level, there are blocks that the player goes under. A menacing Goomba approaches the player, and instinctively the player jumps over it. By the time the player reaches the Goomba and jumps, they will hit a ? block above that would reveal a mushroom. The mushroom goes to the right, hits a pipe and comes towards the player. Since the mushroom resembles the Goomba, the player thinks to jump over it again. Doing this, however, will almost always lead the player to jump right into the Mushroom since after they jump they hit another block from above which causes them to come back to the ground and hit the mushroom. This was to teach players that Mushrooms were a positive thing in the game.”

 

POST 10 HTTP requests, maximum 5 in parallel, POSTed data is id=1, 2, …

$ seq 1 10 | xargs -t -P 5 -I % curl -s --data id=% http://te.st/ > /dev/null
curl -s --data id=1 http://te.st/
curl -s --data id=2 http://te.st/
curl -s --data id=3 http://te.st/
curl -s --data id=4 http://te.st/
curl -s --data id=5 http://te.st/
curl -s --data id=6 http://te.st/
curl -s --data id=7 http://te.st/
curl -s --data id=8 http://te.st/
curl -s --data id=9 http://te.st/
curl -s --data id=10 http://te.st/
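
(-P 5 caps the number of curl processes running at once, -I % substitutes each number produced by seq into the command, and -t makes xargs echo every command before running it, which is the output shown above.)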
 

Extract HTTP URLs from text (HTML for instance)

$ curl -s http://www.iopixel.com | grep http \
     | perl -pe "s|.*(https?://.+?)[^/?=.$&+,:;@\-a-zA-Z0-9].*|\1|g" | sort | uniq
http://bit.ly/tip-off-ios-store
https://twitter.com/iopixel
...
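
If your grep supports -o and -E, a rougher but shorter variant (it keeps trailing punctuation) is:

$ curl -s http://www.iopixel.com | grep -oE 'https?://[^"<> ]+' | sort -u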

 

PrettyPrint JSON data

$ echo '{"foo": {"bar":2, "pouet":3}}' | python -mjson.tool
{
    "foo": {
        "bar": 2, 
        "pouet": 3
    }
}
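
If jq is installed, it does the same thing (and adds colors when writing to a terminal):

$ echo '{"foo": {"bar":2, "pouet":3}}' | jq .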

 

Replace in place with Perl – converting Windows to Unix line endings

$ cat -e test.txt 
mop^M$
mip^M$
$ perl -pi -e "s/\r\n$/\n/g" test.txt
$ cat -e test.txt                  
mop$
mip$
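
If you just want to strip every carriage return (not only those before a newline), tr also works:

$ tr -d '\r' < test.txt > test.unix.txt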

 

Extract each character from a file (to generate a _specialized_ bitmap font, for example)

$ cat -e test.txt
Hello World$
$ cat test.txt | perl -pe "s/(.)/\1 /g" | tr " " "\n" | sort | uniq | perl -pe "s/\n//g"
HWdelor
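
An alternative without Perl, using fold to split the file into one character per line (note that this one keeps the space character in the result):

$ fold -w1 test.txt | sort -u | tr -d '\n'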

 

Rename all *.png files to <md5sum>.png

$ ls -1
file1.png
file2.png
$ md5sum *.png | perl -pe "s/([^ ]*)  (.+$)/\2 \1.png/g" | tr " " "\n" | xargs -n 2 mv
$ ls -1
257ad8424ed04f7a07e4175b1a3d91f3.png
e0ea852f2f2d9564429842f7863af499.png
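
A loop variant that also copes with spaces in filenames:

$ for f in *.png; do mv "$f" "$(md5sum "$f" | cut -d' ' -f1).png"; done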

 

Copy source to dest if source and dest do not have the same content

I sometimes use this in makefiles: some (texture) tools regenerate identical output even when their inputs have been modified, and copying only on change prevents the following rules from being remade.

$ function cpifdiff() { diff "$1" "$2" > /dev/null || cp "$1" "$2"; }
$ cpifdiff source dest
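
cmp -s does the same content comparison silently and a bit more cheaply than diff, so this works too:

$ function cpifdiff() { cmp -s "$1" "$2" || cp "$1" "$2"; }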
 

Easy Parallelism and Currying in Python – POST 10 HTTP requests, 5 processes maximum

Sometimes the curl|xargs version is not adequate.

#!/usr/bin/env python
import functools, urllib, multiprocessing
def postToHost(host, tid):
    print host, "id=%d" % (tid)
    urllib.urlopen(host, "id=%d" % (tid)).read()

pool = multiprocessing.Pool(5)
postToHostFunc = functools.partial(postToHost, "http://te.st")
pool.map(postToHostFunc, xrange(1, 11))
 

Sum the size of *sh files in the current directory (in bytes)

$ du -b *sh | awk '{sum += $1} END {print sum}'
3709
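
du -b (apparent size, in bytes) is a GNU option; a more portable way to get the same total is to count the bytes directly:

$ cat *sh | wc -c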

 

Sum fields

$ seq 1 10 | awk '{sum += $1} END {print sum}'
55
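
Another classic way to do the same sum, with paste and bc:

$ seq 1 10 | paste -sd+ - | bc
55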