
1. It's slow, even for simple microsecond computations like log(2) (quick local timing sketch after this list). It takes about 5-20 seconds to load a page on my 1 Gbps fiber connection. Opening Python/SymPy Gamma is much faster for most things: https://gamma.sympy.org/input/?i=log%282%29

2. Every time I use it, a box saying

    NEW: Use textbook math notation to enter your math. TRY IT
pops up over the result, and clicking the X doesn't hide it the next time I search. This adds ~3 seconds to the result time.

3. I'm a long-time Mathematica user, but typing literal Mathematica syntax almost never works, except for simple expressions.

4. Results are PNGs, so copy-pasting a numerical result takes a few unnecessary clicks ("Plain Text" > Copy).
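
For what it's worth, here's a quick local timing sketch (Python, since that's what I'm comparing against; the numbers are rough and machine-dependent) showing that the computation itself really is in microsecond territory:

    # Rough local timing of log(2): the arithmetic is microseconds at worst;
    # the 5-20 s page load is parsing, scheduling and rendering, not math.
    import timeit
    import math
    import sympy

    # Floating-point log: typically tens of nanoseconds per call.
    print(timeit.timeit("math.log(2)", globals=globals(), number=1_000_000))

    # Symbolic log(2) evaluated to 50 digits: still well under a millisecond per call.
    print(timeit.timeit("sympy.log(2).evalf(50)", globals=globals(), number=1_000))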



> Takes about 5-20 seconds to load a page on my 1Gb fiber connection

Wolfram Alpha is implemented in Mathematica, which, to understate the situation, was never intended as a high-performance backend server language. I suspect that's the reason for the bad performance.

"As a result, the five million lines of Mathematica code that make up Wolfram|Alpha are equivalent to many tens of millions of lines of code in a lower-level language like C, Java, or Python." [1]

Sure, there's something to be said for implementing logic in high-level code, but without a plan for lowering that high-level logic to machine code in a way that performs well, you're setting yourself up for long-term pain.
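
To put a generic number on that (plain Python vs the same logic lowered to machine code via NumPy's C kernels; purely an illustration of "lowering", not a claim about WA's internals):

    # Generic illustration: an interpreted high-level loop vs the same
    # reduction lowered to compiled machine code (NumPy's C kernels).
    import timeit
    import numpy as np

    xs = list(range(1_000_000))
    arr = np.arange(1_000_000, dtype=np.float64)

    py = timeit.timeit(lambda: sum(x * x for x in xs), number=10)
    vec = timeit.timeit(lambda: float(np.dot(arr, arr)), number=10)
    # Expect one to two orders of magnitude between the two, machine-dependent.
    print(f"interpreted: {py:.3f} s   lowered: {vec:.3f} s")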

[1] https://blog.wolframalpha.com/2009/05/01/the-secret-behind-t...


I doubt the bad performance is due to evaluating the expressions themselves. If I type N[Log[2]] into Mathematica, it evaluates in less than a millisecond. It's probably because Wolfram Alpha is using natural language processing to try to process my query and then finally deciding that by N[Log[2]] I mean N[Log[2]]. Or it's not even that, but that their grid scheduler isn't optimized for sub-second latency.
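
If you want to see where the time actually goes, you can compare raw evaluation against a full round trip. A minimal sketch against the Wolfram|Alpha Short Answers API; YOUR_APPID is a placeholder (you'd need a free AppID from developer.wolframalpha.com), and the exact numbers will obviously vary:

    # Local evaluation vs a full round trip to the Wolfram|Alpha Short Answers API.
    # YOUR_APPID is a placeholder; this only measures end-to-end latency.
    import math
    import time
    import requests

    t0 = time.perf_counter()
    math.log(2)
    print(f"local evaluation: {(time.perf_counter() - t0) * 1e6:.1f} us")

    t0 = time.perf_counter()
    r = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": "YOUR_APPID", "i": "log(2)"},
        timeout=30,
    )
    print(f"API round trip: {time.perf_counter() - t0:.2f} s -> {r.text}")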


Ha, hearing the word "process" in Wolfram's voice, there.


Big fan! No, I mean Stephen Wolfram is a big fan… of Stephen Wolfram


> Sure, there's something to be said for implementing logic in high-level code, but without a plan for lowering that high-level logic to machine code in a way that performs well, you're setting yourself up for long-term pain.

Whatever the reason for the performance issue (I don't know enough about WA to speculate what/why/how), I feel like noting the existence of the Wolfram compiler[0] and the various language interfaces[1]. Anyone interested in using Mathematica/WL might get a kick out of exploring those more, at the very least.

[0] https://reference.wolfram.com/language/Compile/tutorial/Over...

[1] https://reference.wolfram.com/language/guide/CLanguageInterf... (a lot of the paclets are bindings for C libraries too)
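
On the language-interfaces point, there's also the Wolfram Client Library for Python (wolframclient), which drives a local kernel from Python. A minimal sketch, assuming a licensed Wolfram Engine or Mathematica kernel is installed:

    # Minimal wolframclient sketch; assumes a local Wolfram Engine/Mathematica
    # kernel is installed and licensed (pip install wolframclient).
    from wolframclient.evaluation import WolframLanguageSession
    from wolframclient.language import wlexpr

    with WolframLanguageSession() as session:
        # Same expression as discussed upthread, to 50 digits.
        print(session.evaluate(wlexpr("N[Log[2], 50]")))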


Mathematica is extremely performant for most of the built-ins; the overhead of interpretation is nearly negligible for all but the tiniest operations.

There is also no reason to think that their request-response boilerplate is written in Mathematica; Mathematica is fully integrated with a lot of languages and runtimes.


> Opening Python/SymPy Gamma is much faster for most things.

Is there a way to make it plot multivariate functions? I tried, but whenever I enter two variables it says "Cannot plot multivariate function." I've seen many Python packages plot multivariate functions, so I'm convinced it should be possible.


I don't think so. You'd need to run it in a terminal with something like

    from sympy import symbols
    from sympy.plotting import plot3d

    x, y = symbols('x y')
    plot3d(x * y, (x, -10, 10), (y, -10, 10))


I usually use Python for math stuff too, but I think log(2) is maybe the wrong example. I got a basically instant result for it (just recorded this): https://imgur.com/a/g5slHsR


Your Internet bandwidth is not really relevant when you're talking to a compute-heavy backend like this. Wolfram|Alpha is not going to load any faster on a 1 Gbps connection than on a 20 Mbps connection, other than some static assets, and even that isn't going to be hugely noticeable if we're talking about 2 ms RTT on fibre vs 8-20 ms RTT on cable/DSL. If you're downloading a giant file off a nearby CDN, then sure, 1 Gbps fibre is useful: I can max out my 1400 Mbps cable connection that way (it's mind-blowing...), and my latency to my upstream gateway outside the house is 8 ms. But Wolfram|Alpha isn't going to load 40% faster for me than for you, because the end-to-end time is dominated by waiting for the backend to complete the request, not by moving bytes.
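
Back-of-the-envelope (assuming a ~1 MB page payload, which is a guess, not a measurement): the transfer-time difference between a 20 Mbps and a 1 Gbps line is a few hundred milliseconds at most, which vanishes next to a multi-second backend response.

    # Transfer time for an assumed ~1 MB page payload at different line rates.
    payload_bits = 8 * 10**6  # ~1 MB (assumption)
    for mbps in (20, 100, 1000):
        transfer_s = payload_bits / (mbps * 10**6)
        print(f"{mbps:>4} Mbps -> {transfer_s * 1000:6.1f} ms transfer")
    # 20 Mbps ~ 400 ms, 1 Gbps ~ 8 ms; either way it's dwarfed by a
    # 5-20 s backend response.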

I will say, though, that Wolfram|Alpha could be "optimised" in the sense that it could do less fancy JS and be a simple box with a submit button, like SymPy Gamma.


I think that's the point. "My internet speed is fast enough that it is not the cause of slowness, so any delay is all on Wolfram|Alpha."


Throughput is not latency, though. 1 Gbps on a dedicated line is not the same as 1 Gbps on an oversubscribed residential node.


If I didn't include that note, someone would say "Is it slow because you're on 56 kbps dial-up?"



