Hi HN,
We’re excited to announce that Phind now defaults to our own model, which matches or exceeds GPT-4’s coding abilities while running 5x faster. You can now get high-quality answers to technical questions in 10 seconds instead of 50.
The current 7th-generation Phind Model is built on top of our open-source CodeLlama-34B fine-tunes, which were the first models to beat GPT-4’s score on HumanEval and are still the best open-source coding models overall by a wide margin: https://huggingface.co/spaces/bigcode/bigcode-models-leaderb....
This new model has been fine-tuned on an additional 70B+ tokens of high-quality code and reasoning problems and achieves a HumanEval score of 74.7%. However, we’ve found that HumanEval is a poor indicator of real-world helpfulness. After deploying previous iterations of the Phind Model on our service, we’ve collected detailed feedback and noticed that our model matches or exceeds GPT-4’s helpfulness most of the time on real-world questions. Many in our Discord community have begun using Phind exclusively with the Phind Model despite also having unlimited access to GPT-4.
One of the Phind Model’s key advantages is speed. We’ve achieved a 5x speedup over GPT-4 by running our model on H100s with NVIDIA’s new TensorRT-LLM library, reaching up to 100 tokens per second single-stream, while GPT-4 runs at around 20 tokens per second at best.
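As a rough sanity check, those throughput numbers line up with the 10-seconds-versus-50 claim above if you assume an answer of about 1,000 tokens (our illustrative figure, not one from this post):

    # Back-of-the-envelope check of the latency claims, assuming a
    # ~1,000-token answer (an illustrative figure, not one from this post).
    ANSWER_TOKENS = 1_000
    for name, tokens_per_second in [("Phind Model", 100), ("GPT-4", 20)]:
        seconds = ANSWER_TOKENS / tokens_per_second
        print(f"{name}: ~{seconds:.0f}s to stream {ANSWER_TOKENS} tokens")
    # Phind Model: ~10s to stream 1000 tokens
    # GPT-4: ~50s to stream 1000 tokens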
Another key advantage of the Phind Model is context: it supports up to 16k tokens. We currently allow inputs of up to 12k tokens on the website and reserve the remaining 4k for web results.
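To make that budget concrete, here's a minimal sketch of how a fixed context window can be split between user input and retrieved web results. The function and the truncation strategy are illustrative assumptions, not Phind's actual implementation:

    # Minimal sketch: budgeting a 16k context window between the user's
    # input (12k) and web results (4k). Illustrative only; not Phind's code.
    CONTEXT_WINDOW = 16_000
    INPUT_BUDGET = 12_000
    WEB_BUDGET = CONTEXT_WINDOW - INPUT_BUDGET  # 4k reserved for web results

    def build_prompt(input_tokens: list[int], web_tokens: list[int]) -> list[int]:
        """Truncate each source to its budget so the combined prompt fits."""
        return input_tokens[:INPUT_BUDGET] + web_tokens[:WEB_BUDGET]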
There are still some rough edges with the Phind Model, and we’ll keep improving it. One area where it still suffers is consistency: on certain challenging questions that it is capable of answering correctly, the Phind Model may need more generations than GPT-4 to arrive at the right answer.
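That consistency gap can be made concrete with the pass@k estimator used for HumanEval (Chen et al., 2021), which estimates the chance that at least one of k sampled generations is correct. The numbers below are made up for illustration, not measured results for either model:

    # Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).
    # Given n sampled generations with c correct, estimate the probability
    # that at least one of k samples passes. Example numbers are made up.
    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        if n - c < k:
            return 1.0  # every size-k sample must contain a correct one
        return 1.0 - comb(n - c, k) / comb(n, k)

    print(pass_at_k(n=10, c=3, k=1))  # ~0.30: often wrong on the first try
    print(pass_at_k(n=10, c=3, k=5))  # ~0.92: usually right within 5 tries

A model with a large gap between pass@1 and pass@k is exactly one that may take more generations to reach the right answer.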
We’d love to hear your feedback.
Cheers,
The Phind Team
In Phind's favor:
* Phind was able, even eager, to recommend specific libraries relevant to the implementation, and its recommendations matched my own research. GPT-4 takes some coaxing to recommend libraries. Phind also provided sample code using the libraries it recommended.
* Phind provides copious relevant sources, including GitHub, Stack Overflow, and others. This is a major advantage, especially if you use these AI assistants as a jumping-off point for further research.
* Phind's suggested follow-up questions were very good. One suggestion to the Phind team: don't remove the alternate follow-up questions once I select one. A couple of times it recommended a few really good follow-up questions, but as soon as I selected one the others disappeared.
In GPT-4's favor:
* GPT-4 gave better answers. This is my subjective opinion (obviously), but if I were interviewing two candidates for a job and using my question as the basis for a systems-design interview, GPT-4 was just overall better. In many cases it added context beyond my question, recommending things like logging and metrics. It seemed to intuit the "question behind the question" far better than Phind's literal interpretation. This is probably highly case-dependent; sometimes I just want an answer to my explicit question. But GPT-4 seemed to understand the broader context of the question and replied with that in mind, leading to an overall more relevant response.
* GPT-4 handled follow-up questions better. This is similar to the previous point, but GPT-4 gave me the impression of narrowing the scope of the discussion based on the context of my follow-up questions. It seemed to "understand" the direction of the conversation and follow the accumulated context.
NOTE: this was not a test of coding capability (e.g. implementing algorithms) but of using these AI coding assistants as sounding boards for high-level design and architecture decisions.