def fizzbuzz(n):
    outstring = ""
    outputs = {5: "buzz", 3: "fizz", 7: "bar"}
    for x in sorted(outputs.keys()):
        if n % x == 0:
            outstring += outputs[x]
    if outstring:
        return outstring
    return n
Then all you gotta do to add more prime numbers is stick them in the dict (or factorize composite numbers you want to put in). Which is more idiomatic in python than endless if elif elif elif elif.
Really? I actually liked that solution, I think it's both readable and clever. The rule priority is defined by the order of your list of tuples ('fizzbuzz' vs. 'buzzfizz'), so easy to change. That's superior to the solution above with sorted(outputs.keys()).
I would say my solution is borderline. It requires maybe one or two extra reads to grok in its entirety due to multiple conditionals, one of which is in a loop and the other outside. Here's a more readable version for someone looking at it the first time, or who is not too used to Python:
OUTPUTS = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar')]

def fizzbuzz(i):
    result = ''
    for x, word in OUTPUTS:
        if i % x == 0:
            result += word
    return result or str(i)
Even the final `or` could be broken out to an `if not`, but in this case it's so simple I prefer putting it together, much like a coalescing operator.
And then for shits and giggles we can say screw it and put it all in one line:
print '\n'.join(''.join(word for x, word in [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar')] if i % x == 0) or str(i) for i in xrange(1, 101))
Here's mine. Almost exactly the same as yours but written a tiny bit different. Also I wrote the complete program.
#!/usr/bin/env python3

# Word of factor
wof = {3: 'Fizz', 5: 'Buzz', 7: 'Bar'}

def fizz(i):
    out = ''.join([wof[k] for k in sorted(wof.keys()) if not (i % k)])
    return out or str(i)

for i in range(1, 101):
    print(fizz(i))
Some notes on my version:
1. I used a dict, like the parent of yours. Tuples are almost just as fine, and you do get slightly nicer code in the part where yours says
word for x, word in outputs
whereas mine says
wof[k] for k in sorted(wof.keys())
However, when I wrote mine I imagined myself sitting in front of an interactive interpreter session.
Now let's say that we've each typed in the code we have here and then the interviewer says "now make it say Bazinga when the number is divisible by 19, and keep the order like with FizzBuzzBar".
Since my wof is global, I can add to it directly
wof[19] = 'Bazinga'
But let's pretend that yours was global as well
outputs.append((19, 'Bazinga'))
So aside from the global / not global we are on equal ground thus far.
But then the interviewer says "now make it say Fnord when the number is divisible by 13, and keep the order still".
Then once again I can simply
wof[13] = 'Fnord'
Whereas if you did
outputs.append((13, 'Fnord'))
then now you are out of order.
So you have to do some extra steps to ensure it stays in order. Either you redeclare the whole thing (imagine there were one million pairs of numbers and strings!), or you sort it, or you find the insertion point, split the list in two, and join it back around the new pair.
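For completeness, a sketch (not from the thread) of how the tuple list could stay ordered without redeclaring or fully re-sorting anything, using the stdlib bisect module; it assumes the rules are kept in ascending factor order:

```python
import bisect

outputs = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar'), (19, 'Bazinga')]

# insort keeps the list sorted by comparing the tuples (factor first),
# so the new rule lands between 7 and 19 without a full re-sort.
bisect.insort(outputs, (13, 'Fnord'))

assert outputs == [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar'), (13, 'Fnord'), (19, 'Bazinga')]
```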
Not saying that mine is in any way better, just felt like sharing how I thought about this :)
2. (Wow note #1 was looong, sorry 'bout that.) I prefer splitting up my code like
out = ''.join([wof[k] for k in sorted(wof.keys()) if not(i % k)])
return out or str(i)
as opposed to your all-at-once
return ''.join(word for x, word in outputs if i % x == 0) or str(i)
In my opinion splitting it exactly the way I did is more readable. Again, not a critique against what you wrote. Esp. since you said it yourself that your code was a bit tongue-in-cheek.
3. Even when split like I did there, I actually don't think my
out = ''.join([wof[k] for k in sorted(wof.keys()) if not(i % k)])
is all that nice either.
One thing, which others have often pointed out, is that too much is happening in one line. But there is another issue as well.
We all make typos or logic errors every now and then. In my experience it's much more difficult to debug list comprehensions compared to... well, whatever we call that which is not a list comprehension. This goes both for the error messages and for following what's going on. I maintain that this is true even if you habitually use a lot of list comprehensions.
Cool, this is another valid way of doing it and I suppose it comes down to preference.
A few notes I'd like to mention though, that are not relevant to our toy examples, but may have implications in a production environment:
1. Unless you intend to store the resulting list from a list comprehension, it's good practice to prefer generator expressions, because they don't grow in memory as the number of iterations increases. Obviously in our examples the sample size is so small that it doesn't matter, but the difference between ''.join(x for x in y) and ''.join([x for x in y]) can grow very large if y consists of many thousands of items.
Here's an example with just integers which requires over a million items before it starts becoming a concern – ultimately it depends on the memory footprint of each item:
In [2]: %memit sum(x for x in xrange(1234567))
peak memory: 46.87 MiB, increment: 0.00 MiB
In [3]: %memit sum([x for x in xrange(1234567)])
peak memory: 68.52 MiB, increment: 11.78 MiB
2. I still prefer the list of tuples in this situation, as you have control of the word order (you may not want to go through the factors in ascending order). There's [x, z].insert(1, y), which changes the list to [x, y, z], to avoid having to add to the end. And because it's a list, you can very easily do a one-time-cost in-place sort.
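For instance, a sketch (with a hypothetical rule list) of that one-time in-place sort:

```python
outputs = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar'), (19, 'Bazinga')]
outputs.append((13, 'Fnord'))           # appended out of order
outputs.sort(key=lambda pair: pair[0])  # one-time, in-place, stable

assert [factor for factor, _ in outputs] == [3, 5, 7, 13, 19]
```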
Again, it's silly to talk about performance in our toy examples, but small details like these can actually have memory and execution time implications if you don't consider the differences.
Ultimately, the best advice is to always write the code as readably and maintainably as possible first, and then optimize. Especially when dealing with a language like Python, which was built for expressiveness, not raw performance.
By the way, speaking of sum, I find it a bit strange that Python allows
'hello' + 'world'
but not
sum(['hello', 'world'])
Intuitively I would have expected the latter to be possible given the former, but I guess it comes down to how the + operator and the sum function are implemented in Python, such that, counter to my expectation, sum is not a function that "applies the + operator" to its arguments. The notion that this is how it should work stems from my impression that "sum" belongs to the same family as "map", "reduce" and "apply" do: that these are somehow "functional" in nature, in the sense observed in the Lisp family of languages.
Basically, sum always starts by adding the first value to 0. This has one important consequence, which can be a valid compromise if you expect to almost always sum numbers: if you tried to fold + over an empty list without the initial 0, you'd get an error, because there's no way to know what the zero value is.
In a typed language you could use type inference and use the inferred type's zero value (if the language has such a concept for all types like for example Go does). In Python I guess you could fall back to None, but then you'd have code that doesn't behave consistently for all inputs.
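A quick sketch of the difference in Python: functools.reduce with operator.add really does just fold + over its arguments (so it works on strings but fails on empty input), whereas sum starts from 0 and explicitly rejects strings:

```python
import operator
from functools import reduce

# reduce folds + pairwise, so it happily concatenates strings...
assert reduce(operator.add, ['hello', 'world']) == 'helloworld'

# ...but has no answer for an empty sequence without an explicit initial value.
try:
    reduce(operator.add, [])
except TypeError:
    pass  # no zero value to fall back on

# sum starts from 0 and explicitly rejects strings, steering you to ''.join.
try:
    sum(['hello', 'world'])
except TypeError:
    pass  # "sum() can't sum strings [use ''.join(seq) instead]"
```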
Associative arrays form a monoid, so this code is (perhaps surprisingly) a special case of the monoid library.
It has been awhile, but I remember not including a version like this because it's a bit intense to throw a sort (either implicit or explicit) into this without even commenting on it.
You can form a monoid out of almost everything. It's not a very interesting concept, and it doesn't require more attention than maybe reading the definition. These blog posts are pointless in my opinion. The title makes as much sense as "Overthinking cooking with edibles".
Monoids are certainly not the most "interesting" thing, but neither are arrays, or even sets. But they're quite important and useful in a parametrically polymorphic world. That's why it's worth introducing them to people new to FP.
Ultimately, these sorts of posts are feeders to conversations about other bigger abstractions like Functors.
The title and the post both grant that for fizzbuzz as asked in many settings, but without a simple domain these conversations tend to explode in complexity. So there it is.
Yea, I'm all for overthinking things, but as a solution to fizzbuzz, this article is silly. You don't need any complicated control flow or logic, or functional primitives, for an extensible, readable fizzbuzz.
Well, of course you do not "need" solid engineering to express the solution to a trivial problem. The point is not what you need to solve fizzbuzz, it's to give people a taste of what it feels like to do so.
And the result is: "oh, it feels normal, but I write less of my own code and reuse existing control flow code." Which is a lot of the point. It's also to point out simple abstractions like Monoid can find uses even in simple problems.
The thing is, you're not only complicating your code by introducing monoids, you're unnecessarily restricting yourself. The only thing you need is higher-order functions to be sufficiently general:
let every m s i =
  if i mod m = 0 then Some s else None

let fallback i l =
  if l = [] then [ string_of_int i ] else l

let join l = String.concat "" l

let fizzbuzz gens i =
  List.filter_map (fun f -> f i) gens |> fallback i |> join

let main () =
  let fizzbuzzstd = fizzbuzz [ every 3 "Fizz"; every 5 "Buzz" ] in
  for i = 1 to 100 do
    print_endline (fizzbuzzstd i)
  done
I've pulled out `join` and `fallback` as separate functions to show that you can also parameterize the code over these functions. And you can use other patterns instead of "every n-th entry" by using generators other than `every`.
This approach can make joining O(n) as opposed to O(n^2) (naive concatenation) or O(n*log(n)) (ropes) worst case and is more general. For example, you can also generate numbered lists for each line ("1. Fizz 2. Buzz"). Yeah, that's silly for FizzBuzz, but we're sort of assuming that this is a placeholder for a real problem that we may want to generalize or tune for performance.
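The concatenation point carries over to other languages too; in Python, for example, repeated += on strings is quadratic in the worst case (each step may copy the whole accumulated string), while a single join is linear. A small sketch:

```python
parts = ['Fizz', 'Buzz', 'Bar'] * 1000

# Repeated concatenation may copy the accumulated string on every step:
# O(n^2) total work in the worst case.
out = ''
for p in parts:
    out += p

# str.join computes the total length once and builds the result in one
# pass: O(n), and the idiomatic spelling in Python.
assert out == ''.join(parts)
```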
And if it's actually a monoid and we know nothing more about it than that, we can just go with `List.fold_left` as the join function. Works for all monoids, too, without mentioning the word.
First of all, thanks for your post. I don't think your code is wrong or bad in any way.
> The thing is, you're not only complicating your code by introducing monoids, you're unnecessarily restricting yourself.
Strictly speaking? I agree. All we need for this problem is semigroups. We never explicitly need a "monoid." But I'm not sure offering the "mempty" case is really over-complicating the code, it just bloats the rhetoric a wee bit.
But I think what you're referring to is the conceptual overhead of Monoids? I'll get to that in a sec.
> The only thing you need is higher-order functions to be sufficiently general:
> code follows
For "sufficiently" general code, it certainly seems to be less general than a piece of code relying on monoid or semigroup operators. You've hardwired yourself to strings and lists.
OCaml actually is doing a natural transformation for you from (Maybe ~> Maybe String) implicitly in there, so you're really relying on the fact that OCaml's got some monoid logic baked in.
> And you can use other patterns instead of "every n-th entry" by using generators other than `every`.
Right, the essence of fizzbuzz is in the list of partial applications of every onto a list of prime factors and strings. Everything else is repetitive joining logic. Which is why appealing to a pattern that captures nearly all that logic seems like a good idea, no?
> This approach can make joining O(n) as opposed to O(n^2) (naive concatenation) or O(n*log(n)) (ropes) worst case and is more general.
I do not see how it's more general unless you demand non-associativity, which, while fair to do in an interview question, is quite unheard of in the programming world. In terms of runtime performance, rope concatenation is constant and a full retrieval is O(n + log n), not O(n log n), so I'm not sure what you're getting at here. But it's worth noting this code (like a lot of OCaml code) tends to pin onto exactly one data structure and can't simply infer the data structure from the calling types.
> Works for all monoids, too, without mentioning the word.
There's an implicit value judgement here: that not naming this pattern is better. You mentioned it above with the "you complicate your code by introducing monoids." I'm curious why the default mode is that I need to defend naming a pattern that obviously saves code here, and captures more use cases. I don't mean this to be a contentious discussion, I just want to know why you think it's inherently bad.
Less? Even ignoring the glut of imports, it's significantly longer than the straightforward and more extensible attempt where you just chain a few `if`s. (The article did strawman that attempt, but it was transparent enough about it.)
loop i = 1..100 {
    out = ""
    if i % 3 == 0 { out += "Fizz" }
    if i % 5 == 0 { out += "Buzz" }
    if i % 7 == 0 { out += "Bar" }
    if out == "" { out = str(i) }
    display out
}
It's about 0.7x the length of the final Haskell version, ignoring indentation and imports, but I don't really care about golfing. My point was simply that your statement that you "write less of [your] own code" was not justified.
> It's about 0.7x the length of the final Haskell version,
The final Haskell version with the explicit "show" does 2 things your example here does not: offer a parameterized final case, and use a rope data structure for the strings.
But in a very real sense your code is a special case of the haskell code c_wraith offered. You're using a builtin monoid (strings) and explicit control flow to do it, so your code can't be reused in other contexts.
If I tactically asked you for a feature that made the output = '' in some cases, you'd have to refactor pretty significantly. The haskell version not only doesn't have that weakness, but it can even be used to do more abstract things like count the number of times Bar _would_ be printed if we wanted to do that.
> but I don't really care about golfing
Okay, but for someone that's indifferent you are sorta gut punching c_wraith's code by penalizing haskell for explicit modules and imports for libraries, writing code in a style where I can explain it line-by-line, and comparing a simplified version to my most complicated version.
> My point was simply that your statement that you "write less of [your] own code" was not justified.
It's a debate I'm willing to have, but I think my blog post defends this point. That code lacks any sense of abstraction over very common patterns, and it ultimately ends up appealing to a builtin monoid and its operations to accomplish the same thing.
It's fizzbuzz, so it's hardly a crisis. But the point of this entire discussion here is about how even in these small cases we can see big abstractions appear and be relevant. I'd rather add 4 import lines and lean on code tested till the cows come home than pretend I can't ever have a bad day writing bespoke logic.
Edit: it occurs to me that implicitly in this assumption is the notion that we must defend using simple, wide-reaching abstractions. Can you give a good reason why something as trivial as "monoids" isn't exactly the sort of thing we should all be using, as opposed to bespoke logic that on a good day conforms to this pattern? Why _wouldn't_ we?
The raw code is more parameterizable than the list, since it is customized with the extremely powerful language "pseudocode". It does so with no loss of concision and is no less abstracted.
> use a rope data structure for the strings
It's pseudocode. Change the first line of the loop to `out = rope()` if you care.
> you are sorta gut punching c_wraith's code by penalizing haskell for explicit modules and imports for libraries
Given I have explicitly excluded this every single time I mention it, I don't know why you are saying this.
> The raw code is more parameterizable than the list, since it is customized with the extremely powerful language "pseudocode". It does so with no loss of concision and is no less abstracted.
I can't tell if you're joking or not here. But let's pretend you're not.
> Given I have explicitly excluded this every single time I mention it, I don't know why you are saying this.
Sorry, I thought you were comparing WITH imports to the proper version. In that case the apples-to-apples version comparison is 0.625x (5/8 or 4/7) between your pseudocode and real world code with abstractions.
I'm... pretty happy with that! Even the version that uses ropes, can be reused in many contexts, and relies on external and highly tested code is only 1 line longer in all cases.
Thank you, I hadn't really considered pseudocode as a valid competitor, but now I realize it makes my blog post look a lot better.
Nowhere near compact (or possibly obtuse?) enough for a Haskell version, surely? Here's one I found elsewhere on the net some time ago:
import Control.Monad (guard)
import Data.Maybe (catMaybes)

(~>) :: (Integral a) => a -> String -> a -> Maybe String
(n ~> str) i = str <$ guard (i `mod` n == 0)

fizzbuzz = 3~>"Fizz" <> 5~>"Buzz"

main = mapM_ putStrLn $ catMaybes $ fizzbuzz <$> [1..100]
At least it generalises nicely to arbitrary fizzbuzz numbers! (edit: as noted below, this version as written fails to print the non-fizzbuzz numbers. An exercise for the reader :) )
TBF, I wrote this article 5 years ago. Monad Comprehensions were in fashion at the time.
Now every dang person wants to write an arrow with a tilde in it and exclaim, 'Naturality!' ;)
Gotta love the use of guard in this. Maybe I should revisit.
I suspect there is also a clever way to do this applicatively, but I haven't really noodled with it to see if it lines up. A chance to use ApplicativeDo!
Point! I did have something like that in the original version, but was experimenting with other approaches and lost the digits somewhere along the way.
For those who don't speak Haskell, it's worth noting that "<>" is the monoid concatenation operator. The "~>" operator returns a function of type "a -> Maybe String". A function type is a monoid if its return type is a monoid, so this works.
from functools import partial

def rem(i, n, message):
    return message if not (i % n) else ''

[(''.join([
    partial(rem, message='fizz', n=3)(i),
    partial(rem, message='buzz', n=5)(i),
]) or i) for i in range(1, 101)]
vs
{-# LANGUAGE MonadComprehensions #-}
module Main where

import Data.Monoid (mappend)
import Data.Maybe (fromMaybe)

fizzbuzz i = fromMaybe (show i) $ mappend ["fizz" | i `rem` 3 == 0]
                                          ["buzz" | i `rem` 5 == 0]

-- mapM_ is our iterator, putStrLn writes to console.
main = mapM_ putStrLn [ fizzbuzz i | i <- [1..100] ]
As the author of the article, I feel like you've missed the point of this article if you're writing a "vs" post.
The important point here is that monoids make this easier and less full of unique logic. The code you've posted here is clever and thoughtful, but still commits to the same "we don't abstract over monoids" problem I identified. You saved 4 lines over other versions and probably lost clarity in your quest for that.
It's worth noting that you are leveraging a built-in monoid: strings. '' is your identity element, and string concatenation (your join) is the combining operation.
Sorry, it is very early in the morning so I'm still waking up. I didn't mean anything by the groggy 'vs' other than a rosetta syntax comparison to Python.
With that being said, I don't think my snippet lost clarity and is functionally (har har) equivalent to the Haskell.
I believe where Haskell would shine is if the mappend operated on a more complex type than a regular string, but honestly I only started reading Learn You a Haskell two days ago, so maybe I'm missing something deeper?
Edit: I guess you could say Python strings are monoids, with '' playing the role of the empty value.
I think the responses you get here is largely due to the "language jumping" which instantly turns it into a language comparison.
If you'd stuck to one language, and implemented the "machinery" in that, it'd perhaps be clearer that the point wasn't C/Python/Ruby vs. Haskell.
E.g. here's a generic version in Ruby that to keep it simple just treats an Array as Maybe, with the empty Array as Nothing, giving us mappend and fromMaybe (we could make this more convoluted with a lazy "mappend" too, but it'd make the code even more un-Rubyish) like these:
def mappend(*args); args.map(&:call).compact; end
def fromMaybe(default); [yield].map { |m| m.empty? ? default : m }.first; end
And just as clear and extensible as the Haskell version.
I think the issue with using this as an example is compounded by making it one where you can also make the lambdas generic over a simple hash lookup, which makes it easy to overlook that the above can handle more divergent rules as well (e.g. ones not relying on a remainder).
I took a course that focused on Haskell in college. I enjoyed it. I love reading articles about how neat Haskell is. I've still never been convinced that I should use Haskell over a more mundane language for any actual project.
Depends on what the project is. If parsing is the biggest part of the project then I would say Haskell is top-class in that respect. It has proven success stories. However, due to the lack of control over GC it has latency issues with certain kinds of work and it's not a great choice for writing demanding games.
For me, one of the big holes in the Haskell story is a great, idiomatic GUI toolkit. That makes it a little more difficult to use for standard desktop applications, but not impossible (after all, there are bindings to common frameworks like GTK etc).
On the GUI toolkit issue, I'd recommend combining gtk2hs (which is a thin wrapper around GTK) with Reactive Banana. GTK signals are isomorphic with Banana events and GTK attributes are isomorphic with Banana behaviors. So you can connect up a button press signal from GTK to an Event, transform that event with data from some other behavior, and hence generate a new behavior which is connected to the GTK attribute of the text displayed in a text box. Your application logic is then expressed as the plumbing that pipes data from event sources to event and behavior sinks. This works well because the event plumbing is composable in ways that callbacks and state variables of native GTK aren't.
Have to agree with you on all counts. I forgot about parsing, I generally don't have to do that. The final project my group attempted was a simple game/simulator in Haskell and it was by all counts a pain to work around the statelessness. When we were trying to create an interface for the game we also noticed the lack of good GUI libraries.
I do pretty ordinary stuff and it makes my work a lot nicer. An open source example is the Elasticsearch client I wrote called Bloodhound: https://github.com/bitemyapp/bloodhound
It makes writing and maintaining applications that use ES a lot more maintainable. Makes the Query API a lot more discoverable too.
There are some interesting Perl 6 implementations here[1] (and in the discussion of that), which show some interesting capabilities of multiple dispatch, and farther down, of using subtypes along with multiple dispatch.
I particularly like this terse one:
sub fizzbuzz($n, $_ = [$n % 3, $n % 5]) {
    when [0, 0] { "fizzbuzz" }
    when [0, *] { "fizz" }
    when [*, 0] { "buzz" }
    default     { $n }
}
That's fairly tight, while remaining somewhat readable. There's a version that adds a line or two by moving $_ into the body of the sub as well in case that's more readable to some.
The praised Ruby code in the article is crap. It abuses side effects (three unnecessary print invocations) and has to resort to a helper variable (@printed = "Fizz"). I would never let something like this through code review.
Like you can't simply first generate data and then print it...
(Array lookup with an out-of-range offset returns nil, Array#join produces empty strings for nil elements, and accessing element 0 of a zero-length string returns nil.)
The "\r" in the format string for the number is carriage return. It ensures that as long as you print to terminal rather than redirect to file etc., the number gets overwritten if anything else gets printed on the same line.
Monoid is a concept that is older than category theory. And way simpler. I don't get why people always say category theory when they mean basic algebra. Maybe they don't know better (lack of math background, expository texts that go straight to "category theory")?
Introduction to category theory via haskell, and not learning the right areas of abstract algebra?
It's plausible to even do a degree in mathematics and never encounter the name 'monoid' - you'll obviously encounter the structure, but my abstract algebra classes just skipped ahead to groups, and then to rings and modules, instead of spending any time on the simpler structures.
Other times, like when studying formal languages, the structure is introduced because regular languages form a monoid (you have an empty language and a way to combine any two languages), but it's never given the name.
So you come across haskell, and it gives a name to things like monoids, monads, functors, and so on, and says that monads and functors come from category theory, and it seems like a valid inference to say that monoids also come from category theory.
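For the non-Haskellers following along, the definition really is that small: an associative combining operation plus an identity element. Python strings under concatenation already satisfy it, which a couple of assertions can illustrate:

```python
# Monoid laws for strings under concatenation:
a, b, c = 'Fizz', 'Buzz', 'Bar'

assert (a + b) + c == a + (b + c)   # the operation is associative
assert '' + a == a == a + ''        # '' is the identity element
```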
Haskellers tend to think category theory is far more useful than it is, and non-Haskellers the complete opposite. On a forum of predominantly non-Haskellers, I'm happy for them to get a bit of an overdose of the terminology.
(Edit from sillier comment.)