Partially [ via reddit ]
$ uname -a
Darwin peregrine 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386
$ python --version; time (python -c "print ''.join([chr(int(x, 16)) for x in '54 59 20 66 6f 72 20 74 68 65 20 75 70 76 6f 74 65'.split()])")
Python 2.6.1
TY for the upvote
real 0m0.039s
user 0m0.014s
sys 0m0.015s
$ ruby --version; time (ruby -e "puts '54 59 20 66 6f 72 20 74 68 65 20 75 70 76 6f 74 65'.split.map{|a|a.to_i(16)}.pack('c*')")
ruby 1.8.7 (2009-06-12 patchlevel 174) [universal-darwin10.0]
TY for the upvote
real 0m0.012s
user 0m0.002s
sys 0m0.003s
$
Update: Well, my friend Chuck over at evilchuck.com pointed out that my original post makes a completely pointless comparison, since both interpreters are doing practically nothing.
He is absolutely right. If this were used as a "benchmark test," it would be laughable. Well, I included the time stats, so I guess I asked for it.
Here is a little test program that generates N sets of random hex strings (each decoding to 17 characters) and then decodes the sets one at a time.
Ruby code:
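(The listing here is a minimal sketch rather than the original test.rb, which didn't survive intact: the character set of letters, digits, and spaces, the 17-character length, and the -q handling are guesses based on the sample output and commands further down.)

#!/usr/bin/env ruby
# Minimal sketch of test.rb: build N random strings, hex-encode them,
# then decode the sets one at a time. -q suppresses the decoded output.
# The character set below is an assumption based on the sample output.

CHARS = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [' ']

n = ARGV[0].to_i
quiet = ARGV.include?('-q')

# Hex-encode n random 17-character strings, e.g. "TY" -> "54 59".
sets = (1..n).map do
  str = (1..17).map { CHARS[rand(CHARS.size)] }.join
  str.unpack('C*').map { |b| b.to_s(16) }.join(' ')
end

# Decode each set with the same idiom as the one-liner at the top.
sets.each do |s|
  decoded = s.split.map { |a| a.to_i(16) }.pack('c*')
  puts decoded unless quiet
end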
Blargh:
$ time ( ruby test.rb 500 )
wnATVthnfLdBCQguVr
[ Large output elided. ]
jPgUoj1VtucFgrt7iv
real 0m0.051s
user 0m0.030s
sys 0m0.005s
$ [ With n = 5000 ]
VyiKEjdJhjMKHPRqpP
[ Larger output elided. ]
Uh3 Wsg2 HUbAAe04J
real 0m0.461s
user 0m0.293s
sys 0m0.017s
$ [ With n = 50000 ]
yKfTGOrtCJTtyWvWJk
[ Even larger output elided. ]
QkxCNYwEFVund6H8kz
real 0m3.991s
user 0m3.238s
sys 0m0.136s
$ time ( ./test.rb 100000 -q )
real 0m6.531s
user 0m6.183s
sys 0m0.114s
$ time ( ./test.rb 1000000 -q )
real 1m42.502s
user 1m17.887s
sys 0m2.240s
$
And here's a test for some python code that I hacked together that is probably really crappy but seems to do the job.
Python version:
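(Again, a minimal sketch rather than the original test.py: the names, the character set, and the batch-style decode() are assumptions, chosen to match the profile output further down, which shows a single call to decode().)

#!/usr/bin/env python
# Minimal sketch of test.py: build N random strings, hex-encode them,
# then decode them. -q suppresses the decoded output. Names and the
# character set are assumptions inferred from the output shown below.
import random
import string
import sys

CHARS = string.letters + string.digits + ' '

def encode(s):
    # Turn a string into space-separated hex bytes, e.g. 'TY' -> '54 59'.
    return ' '.join('%x' % ord(c) for c in s)

def decode(sets):
    # The decode idiom from the one-liner at the top; one call handles
    # the whole batch, matching the single profiled call further down.
    return [''.join([chr(int(x, 16)) for x in s.split()]) for s in sets]

def main():
    n = int(sys.argv[1])
    quiet = '-q' in sys.argv
    sets = [encode(''.join(random.choice(CHARS) for _ in xrange(17)))
            for _ in xrange(n)]
    for d in decode(sets):
        if not quiet:
            print d

if __name__ == '__main__':
    main()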
Running it from my bash shell produces:
$ time ( ./test.py 500 )
lFy 7ZGqT1Oc BT31
[ Elision. ]
fnpLoKCaYFWL9U9SE
real 0m0.089s
user 0m0.047s
sys 0m0.024s
$ time ( ./test.py 5000 )
Hqa2g4iX6Nw8Nzua5
[ Elision. ]
ofMMAG WnxUEP0RVC
real 0m0.345s
user 0m0.282s
sys 0m0.030s
$ time ( ./test.py 50000 )
kVgC4etTXn5vxrLzV
[ Elision. ]
ot8PSkiWyn1KTitXT
real 0m3.402s
user 0m2.647s
sys 0m0.108s
$ time ( ./test.py 100000 -q )
real 0m4.403s
user 0m4.241s
sys 0m0.064s
$ time ( ./test.py 1000000 -q )
real 0m43.506s
user 0m42.172s
sys 0m0.565s
Clearly, Python is much faster than Ruby for large sample sizes. At 1M, Ruby finishes in 1m43s real and 2.24s sys; Python finishes in 43.5s real and 0.565s sys.
Just for fun, I tried running the Ruby code with Ruby 1.9. This isn't quite fair, since I'm not going to go to the trouble of installing Python 3.0.1 for the other side, but it doesn't matter: Ruby 1.9 is significantly faster than 1.8.7, but still can't beat Python 2.6 on this particular test.
$ time ( ruby19 test.rb 1000000 -q )
real 0m49.606s
user 0m47.847s
sys 0m0.764s
$
I was curious, so I tried to isolate the decoding part of the two versions. Here's what I found:
$ ruby19 test.rb 1000000 -q
      user     system      total        real
 16.750000   0.310000  17.060000 ( 17.843644)
$ ./test.py 1000000 -q
         1 function calls in 14.241 CPU seconds
   Ordered by: internal time, call count
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1   14.241   14.241   14.241   14.241 test.py:18(decode)
$
Looks like the decode method written in Python executes at least 3 seconds faster.
These tests are not intended to be benchmarks of any kind. True benchmarking is a science in which I have not educated myself very well; I just wanted to get a feel for some of the differences between Python and Ruby idioms and a rough comparison of their execution times. My guess is that if anyone were to take a close look at the code used in this post, they'd be able to point out several flaws and mistakes that affect the overall statistics.
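(For reference: the Ruby numbers above look like output from the standard Benchmark module, and the Python report looks like the stdlib profiler's pstats format. Here's one plausible way the Python side could have been isolated — a sketch, not the original code, assuming the decode() and sets names from the test.py sketch above.)

import profile

# Profile only the decode step, keeping string generation out of the
# numbers. decode() and sets are the (assumed) names from the sketch
# of test.py above.
p = profile.Profile()
p.runcall(decode, sets)
p.print_stats('time')   # sort by internal time, as in the report above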
Update 2: Thought I'd try JRuby, and...
$ time ( jruby test.rb 1000000 -q )
real 0m28.038s
user 0m29.531s
sys 0m1.310s
$
Wow.