What annoys me with Kiwi is that it doesn't work with business class... Enter Auckland <> Europe, outbound any day in March, return any day in September...
"We couldn't find your trip."
Also, its cities filter doesn't seem to work. Try the search above, but in economy and with the cities inverted. The results are dominated by flights from London, no matter how many times you deselect LHR. London sucks for transit if you are outside the UK.
Edit: Google Flights is no better here. Lots of carriers hide business-class prices from Google, take ages to load (some even time out), straight up lie ("Oops! The price has increased" since you last searched 10 seconds ago - I'm looking at you, Qatar), or mix in random economy-class flights out of nowhere...
Yep, Air India had some decent prices recently from ARN to SYD (which I've tried, and while I'm fine with quite run-down upholstery and nearly broken seats, the biggest problem was that the seat was too short to fully stretch out, hence the lack of sleep).
The best I've found yet was one of the Chinese carriers Melbourne to Moscow return for 1650 euros...
Oh what nonsense. I travel on the Caledonian Sleeper all the time, and while the breakfast is perhaps not amazing (although it's fine), you only have to exit the train immediately at stops that are not the final destination; at termini like London and Glasgow you routinely have up to an hour to get off. The bedding criticism is also odd, as it's all white cotton sheets and pretty comfortable. Also, Caledonian Sleeper has ordered new coaches and is investing a lot in the service. See: https://www.sleeper.scot/news/85-newtrains The sleeper is an amazing service and I love the ability to spend £100, get a room, and save a packet on a London hotel. I can arrive in London at 6.30/7.00am, head to meetings, and then be back in Scotland in 4 hours.
That was the point of the heap benchmarks. In CPython you have to use heapq; writing something yourself in Python will be miles off the pace. Whereas in PyPy a Python implementation of a heap, or your own version, is comparably fast. As it should be. This 'hunt down the parts written in C' approach to the standard library is what I am increasingly objecting to.
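To make the comparison concrete, here is a minimal pure-Python binary min-heap of the kind these benchmarks exercise (written from scratch for illustration, not taken from any of the linked benchmarks). In CPython, code like this loses badly to the C-backed heapq; under PyPy's JIT the gap largely closes.

```python
class BinaryHeap:
    """A minimal binary min-heap in pure Python, for illustration only."""

    def __init__(self):
        self._a = []

    def push(self, x):
        # Append at the end, then sift up until the heap property holds.
        a = self._a
        a.append(x)
        i = len(a) - 1
        while i > 0:
            parent = (i - 1) // 2
            if a[parent] <= a[i]:
                break
            a[parent], a[i] = a[i], a[parent]
            i = parent

    def pop(self):
        # Remove the root, move the last element up, then sift down.
        a = self._a
        top = a[0]
        last = a.pop()
        if a:
            a[0] = last
            i, n = 0, len(a)
            while True:
                left, right = 2 * i + 1, 2 * i + 2
                smallest = i
                if left < n and a[left] < a[smallest]:
                    smallest = left
                if right < n and a[right] < a[smallest]:
                    smallest = right
                if smallest == i:
                    break
                a[i], a[smallest] = a[smallest], a[i]
                i = smallest
        return top

if __name__ == "__main__":
    h = BinaryHeap()
    for x in [5, 3, 8, 1]:
        h.push(x)
    print([h.pop() for _ in range(4)])  # [1, 3, 5, 8]
```

Run it under CPython and then under PyPy with a large N to see the difference the thread is arguing about.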
1. Does anyone know the latest update on NumPyPy? PyPy for me is just not a usable proposition because I heavily use NumPy (and SciPy et al). So I am forced to use slow Python + fast NumPy, or slow NumPy + fast Python. Very saddening. PyPy's C-extension support is just so off the pace; NumPyPy was meant to solve that quandary.
And I know some smart Alec will trot out the usual 'downshift into C' line that everyone (including Guido) uses as the final go-to solution for performance, but that is simply a disgrace in 2017. Even JavaScript is fast. Why can I not choose to write Python and have it be fast? And yet Python 3 is getting slower. Don't agree?
Look at these benchmarks of Python heaps written in Python (not using the C-based builtin heapq): https://github.com/MikeMirzayanov/binary-heap-benchmark Python generally is off the pace, but Python 3 is about twice as slow as Python 2 and miles off JavaScript.
But PyPy is proof that Python can be fast. It puts quote/unquote "pure Python" within striking distance of Go, and when I run that test suite on PyPy, it's similar to the Node.js score. Why does this matter?
Because I want to write bloody Python not C.
And it is so tantalisingly close - look at a blog post like: https://dnshane.wordpress.com/2017/02/14/benchmarking-python... The Fibonacci heap that someone wrote in quote/unquote "pure Python" can never compete with heapq (the C-based builtin lib) when run in CPython, but on PyPy it can. Fast code written in Python. So what are the problems holding back PyPy? I think possibly money and the number of devs working on it. JavaScript had Mozilla, Google, Microsoft and Apple in a browser war, plus loads of open-source input.
But is the biggest stumbling block not Guido himself and the core Python devs? Do they just philosophically disagree with PyPy, or is it simple disinterest?
Well, whatever it is, it is heart-breaking to want to write fast code in my favourite language, leveraging all its power including NumPy/SciPy etc, and not be able to. And yes, my use-case is perhaps quite unusual: a very CPU-intensive service that ideally computes and returns a real-time calculation (involving 500k function calls) in 10-50ms.
But getting fast NumPy into the PyPy mix (i.e. all the speed of the JIT + NumPy no worse than today) would be a HUGE step forward for my PyPy adoption. What is the latest? How can I help?
In short - funding. If we can find someone who wants fast NumPy AND fast Python under the same hood, we can combine the approaches of cpyext and NumPyPy and make it fast. The project is just too big to do in spare time. I've been trying to find funding for that for quite a while, but I haven't been able to find any sizable backer just yet.
Newsflash: code like this[1] will never be fast in CPython, and if you write a lot of code like that and are sad when it's slow, then you need a different language - especially if you expect it to be as fast as a JIT-compiled language like JS on V8. Or use something like Cython.
That benchmark is pretty meaningless anyway, IMO. Here are some halfway decent, official and up-to-date benchmarks comparing Python 2 and Python 3[2].
Python 3 is slower in some areas, noticeably startup time, but it's not all doom and gloom. It's faster in a lot of places. And productivity is hard to benchmark, but IMO py3 is way better in that area.
Ah, the 'smart alec' has appeared. I'm not stupid; I know code like that won't be fast in Python. But PyPy shows that it can be a hell of a lot faster than CPython and right up there with Node.js, and the travesty is that CPython is so far off the pace and getting slower.
Would you please not post uncivilly to HN, regardless of how annoying you find someone's comment? This kind of thing degrades discussion and provokes worse from others. We're hoping to do better than that here.
I suspect "smart alec" was chosen with exactly the intention you're expecting - i.e., I'm not a native English speaker, but I believe the term, when used by adults to refer to other adults, is benign - perhaps the "G"-rated version of "wise-ass" or "pedant." Maybe I'm wrong here, but benefit of the doubt, perhaps...
Python 3.6.1 is a lot faster than 3.3.2, which was used back when the heap benchmark was done. On my system, 2.7.10 vs 3.6.1:
C:\Test\PyBench>py -2 test.py
Done in 1188.308127
C:\Test\PyBench>py -3 test.py
Done in 1454.897614
Please bring up-to-date benchmarks to the discussion, and stop complaining about old problems.
Note: I adjusted the workload to 1000x fewer iterations to get results quickly for this comment, so these numbers aren't comparable to the list in the GitHub repo. But even if I hadn't done that, they wouldn't be comparable anyway, because I ran them on my own system.
Python, R and other languages are just slower. If that is an issue I again will agree you need to move to another language.
Can I ask what you need the speed for? How long are your reports running? A lot of the time the reports run in under a few minutes, and people just don't subset their data before coding against it. People feel it is "BIG DATA" when it is just annoying data that takes less than a minute to spit out.
The coalescence around Python and R for numerics to the extent it did was kind of premature. They're both great languages but I've given up on implementations of either of them achieving the kinds of speeds that they should be at for the things people are using them for.
I think there'll be more traction expanding the libraries for Julia, Nim, or Kotlin, all of which are much faster than Python, and similar in expressiveness. It's probably easier to create an optimization/ML/linear algebra/RNG/whatever library in Julia, Nim, or Kotlin than trying to get good performance out of R or Python.
I completely understand PyPy and NumPy, and why Python and R became popular: there was a need for expressivity in the numerics space and other languages weren't offering it. But if you've been following them for long enough, it's clear that both have problems under the hood. I think people just crossed the boundary of appropriate use at some point, because the language syntax is so appealing for these sorts of things.
Maybe I'm wrong about all this and PyPy will deliver, but I'm not holding my breath any longer. No offense to the PyPy developers - I'm immensely impressed with their work, and they've produced much more than I ever thought they would - but I do see some sort of asymptote. I think it would take a serious corporate influx of effort like what happened with JavaScript, and even then I worry that compatibility issues would rear their head. My guess is the Python 2-3 split would become a Python 2-3-PyPy split - maybe that's fine, though.
I'm not a "smart alec" for pointing out that Python is, was and will be bad at heavily numerical, number-crunching code. It's not what Python is built for.
I mean... it takes 28 bytes to store a single integer in Python.
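That figure is easy to check: sys.getsizeof reports the full size of the object, including the PyObject header and reference count, not just the numeric payload. The exact number below assumes a standard 64-bit CPython 3 build.

```python
import sys

# A small int carries the full object overhead, far more than a C int64's
# 8 bytes. On a standard 64-bit CPython 3 build this prints 28.
print(sys.getsizeof(1))

# Python ints are arbitrary precision, so the size grows with magnitude.
print(sys.getsizeof(10**100))
```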
This is silly; Python is used by some of the biggest number crunchers on earth... as glue code. If you don't like the dual-language paradigm and want fast code, go learn Fortran.
It's not really a travesty. CPython just isn't designed for that kind of workload, and that's fine, because not many people use it to do that kind of thing.
I think the core devs are doing a fine job with CPython as the reference implementation and developing PyPy takes a different set of expertise; JIT and compilers specifically.
I think sponsorship of PyPy would be welcome -- but it seems non-obvious where that would come from.
JavaScript has the fortune of being the language that drives a very important platform - and Chrome has been a particularly strategic investment for Google, giving it more control over the web than it has ever had before. Java has Android... Python unfortunately doesn't have that sort of standing in any area that I'm aware of.
And that would be fine; my mind goes to similar examples like Lua, where the reference and JIT versions co-exist. But PyPy has not had the impact that LuaJIT (for example) has had on the Lua community.
I agree writing C extensions is not a solution. It is evidently too hard. Even the standard pickle library in Python 3 has a memory corruption bug: http://bugs.python.org/issue23655
We [the MSFT Python team] tried to get PyPy some funding, but it didn't go very far. I'll keep trying. We've also started this project to enable JITting for CPython:
I understand that NumPy is fast in CPython because the high-speed code is written in C.
What you could do is much simpler:
Split your Python application in two parts:
1. Keep your functions that make heavy use of Numpy and Scipy under CPython; expose your algorithms/functions as a web service/REST service/etc running under CPython.
2. The rest of the application, which of course needs to call the functions in (1), can be written in PyPy and call the web service in (1). This is where you would put the general-purpose stuff like web, graphics, database access, and of course all symbolic manipulations that do not require Numpy/Scipy.
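The two-process split above can be sketched with nothing but the standard library. Everything here is invented for illustration - the endpoint, the JSON shape, and sum() standing in for the real NumPy computation - but the pattern is the point: the CPython side exposes HTTP, and the PyPy side only ever speaks HTTP, so no C extensions cross the boundary.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComputeHandler(BaseHTTPRequestHandler):
    # Part 1: runs under CPython, next to NumPy/SciPy.
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        data = json.loads(self.rfile.read(length))
        # Stand-in for the real NumPy/SciPy computation.
        result = {"sum": sum(data["values"])}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_server():
    # Port 0 lets the OS pick a free port for this demo.
    server = HTTPServer(("127.0.0.1", 0), ComputeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Part 2: the PyPy side; pure Python, just an HTTP client.
def call_service(port, values):
    req = urllib.request.Request(
        "http://127.0.0.1:%d/compute" % port,
        data=json.dumps({"values": values}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_server()
    print(call_service(server.server_address[1], [1, 2, 3]))  # {'sum': 6}
    server.shutdown()
```

In practice you would run the two halves as separate processes (one CPython, one PyPy); the per-request serialization cost is the trade-off, so this only pays off when the computation dwarfs the round trip.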
> And I know some smart Alec will trot out the usual 'downshift into C' line that everyone (including Guido) use as the final goto solution for performance but that is simply a disgrace in 2017.
Easy gluing of other languages together has long been something I considered a strength...but I suppose to each their own.
> Why can I not choose to write Python and it be fast??
Well, there are lots of reasons - including implementation issues, and I don't know them all - but I think Python has a very clear productivity niche. Personally, I am OK with Python trading performance for productivity. For the most part, Python has rarely been so much of a bottleneck that rewriting a very small piece of logic to be performant couldn't solve my use case.
> And yet Python 3 is getting slower. Don't agree?
Yeah, I don't agree... that benchmark uses Python 3.3. Python 3 performance turned the corner relative to Python 2 around 3.4. Perhaps a talk from this year's PyCon would help illustrate:
Indeed, I would even say that Cython is further proof that there are frontiers of performance still to be explored. But with PyPy (as with Cython) there are sacrifices you have to make.
Personally, I think the most promising performance improvement that is tantalizingly close for me is Larry Hastings's Gilectomy project:
But at the same time, I am not sure that Python ever needs to be fast running in CPython. With `WASM` perhaps it is better to just compile Python.
I don't know; performance in Python has always been a mixed bag... but personally I think it doesn't get much focus because it doesn't really serve Python's target niche. I don't know if there ever will be (or should be) one language to do everything... and as it is, Python is a good "productivity"-focused language to have in your toolbox, so to speak.
Then you probably should not use Python. Python is more of a glue language; you should strive to make your program look like business logic. In the real world, to solve this problem you would write code like this:
import time

if __name__ == "__main__":
    start = time.clock()
    N = 10000000
    h = list(range(N))
    h.sort()
    for i, v in enumerate(h):
        assert i == v
    print("Done in %f" % ((time.clock() - start) * 1000))
$ python3.6 heap.py
Done in 2389.877000
Or if heap needs to be used:
import time
from heapq import heapify, heappop

if __name__ == "__main__":
    start = time.clock()
    N = 10000000
    h = list(range(N))
    heapify(h)
    for i in range(len(h)):
        assert i == heappop(h)
    print("Done in %f" % ((time.clock() - start) * 1000))
$ python3.6 heap.py
Done in 10716.348000
Micro benchmarks are silly because you'll never do those things in real code.
You can read this great article, just released, about the Python 2017 language summit: "Keeping Python Competitive" [1]. There you can read the opinions of many core developers. PyPy is also discussed.
Unlike you, Python is _not_ my favorite language, but the matplotlib lock-in is real.
Hopefully a matplotlib equivalent will materialize for Clojure (where linear algebra is plenty fast and the language itself is fast enough out of the box) so I can be done with Python forever.
I can't answer how NumPyPy is going, but Numba works pretty well for me for writing fast numeric code in Python. The supported language subset is a bit restricted, and installing LLVM is a bit of a hassle, but overall it's great.
What are you currently using to solve that problem? I've run into it too, and had to use C extensions to make my code faster, which isn't ideal.
I did not run any benchmarks, and OTP is not directly comparable to GraphHopper as the feature sets are quite different. So this is just my impression; take it with a grain of salt.
My impression is that GraphHopper is fast and does not take much memory, whereas OTP feels quite "heavy" on comparable datasets (city-area-level OSM data). So my hope is that if GraphHopper supported timetable routing with the same quality, it would probably beat OTP.
My ultimate goal is open-source multimodal routing at the country level (Germany), including public transport. Frankly, importing country-level datasets into OTP seems unrealistic. GraphHopper, on the contrary, seems very promising.