I really like the idea behind Beaker, but the last time I played with it, my main concern was memory use with a somewhat large dataset (one that fills most of the machine's RAM): handing the data to another language creates an additional in-memory copy for that language to use, so memory consumption gets multiplied by the number of languages that need an instance of the dataset. If datasets could somehow live in shared memory, it would be much more useful (if they've figured that out since I last used it, please tell me).
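To be concrete about what I mean by shared memory: something like Python's stdlib `multiprocessing.shared_memory`, where a second consumer attaches to a named block and maps the same bytes instead of receiving a serialized copy. This is just a sketch of the general mechanism, not anything Beaker actually does:

```python
import struct
from multiprocessing import shared_memory

# Toy "dataset": four doubles packed into bytes.
payload = struct.pack("4d", 1.0, 2.0, 3.0, 4.0)

# One writer "owns" the named shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[: len(payload)] = payload

# A second consumer attaches by name -- it maps the same memory
# rather than getting its own serialized copy of the data.
reader = shared_memory.SharedMemory(name=shm.name)
round_trip = struct.unpack("4d", bytes(reader.buf[: len(payload)]))

reader.close()
shm.close()
shm.unlink()

print(round_trip)  # (1.0, 2.0, 3.0, 4.0)
```

A cross-language version of this is obviously harder (every runtime would need to understand the same in-memory layout, which is roughly what Arrow-style formats aim at), but it's the difference between O(1) and O(number-of-languages) copies of the dataset.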
It's kind of weird to use, but it works for the most part. You can clean up some data in Python, push it over to a cell written in R for some other evaluation, then push the results back to Python.
http://beakernotebook.com/