Mannykannot is correct that the exact constant factor depends on the machine and the exact code. However, we can make some refinements based on the algorithms alone.
Bubble sort is n² / 2 comparisons, since each pass processes one less element than the last: (n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2 ≈ n² / 2. This is a pretty common pattern, so it's good to remember.
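To make that concrete, here's a minimal Python sketch (the function name is mine) instrumented to count comparisons; for any input of n elements it reports exactly n(n − 1)/2:

```python
def bubble_sort(a):
    """Bubble sort that counts comparisons. Each pass stops one element
    earlier than the last, so the total is (n-1) + (n-2) + ... + 1."""
    a = list(a)
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a, comparisons

_, count = bubble_sort(range(100, 0, -1))
print(count)  # 4950 == 100 * 99 / 2, regardless of the input's order
```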
Insertion sort starts with an empty sorted prefix and processes one extra element per pass, so in the worst case it's also n² / 2. However, on average each pass only needs to scan half of the sorted prefix to find the correct insertion point, so it's actually n² / 4. If you use a binary search to find the insertion point, it becomes n log n -- n passes, each performing a log n binary search. (This is usually called binary insertion sort.)
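A comparable sketch for the linear-scan version (again, names are mine): run it on random input and the comparison count comes out near n² / 4; on already-sorted input it drops to n − 1.

```python
import random

def insertion_sort(a):
    """Insertion sort counting comparisons. Each pass scans left through
    the sorted prefix until the insertion point is found -- about half
    the prefix on average, giving ~n²/4 comparisons for random input."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]  # shift the larger element one slot right
            j -= 1
        a[j + 1] = key
    return a, comparisons

random.seed(1)
_, count = insertion_sort([random.random() for _ in range(1000)])
print(count)  # roughly 250,000 ≈ 1000² / 4
```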
However, those numbers count comparisons, which is typically how sorting algorithms are measured; that assumes all memory operations take constant time. But you might also want to keep track of swaps or other memory movement. For instance, when you add the element moves back to binary insertion sort, it's O(n²) again, because each pass still has to shift, on average, half of the sorted prefix to make room for the insert -- O(n × (log n + n / 2)) = O(n²).
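Here's a sketch of that trade-off, using Python's standard bisect module for the binary search and a move counter of my own; on 1000 random elements the moves land near 1000² / 4 even though the comparisons are only ~n log n:

```python
import bisect
import random

def binary_insertion_sort(a):
    """Binary insertion sort instrumented to count element moves.
    bisect_right does ~log2(i) comparisons per pass, but the insert
    still shifts every element after the insertion point."""
    a = list(a)
    moves = 0
    for i in range(1, len(a)):
        key = a[i]
        pos = bisect.bisect_right(a, key, 0, i)  # binary search the sorted prefix
        moves += i - pos                         # each element after pos shifts right
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = key
    return a, moves

random.seed(1)
_, moves = binary_insertion_sort([random.random() for _ in range(1000)])
print(moves)  # roughly 250,000 moves, despite only ~n log n comparisons
```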
Choice of data structure can also make an impact. Insertion sort is not awesome on arrays, because it has to block-copy the tail of the sorted region for every insert. However, on a linked list the insert is a constant-time pointer splice, so that shifting cost disappears.
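A minimal linked-list sketch (the Node class and names are assumptions of mine): the insert itself is an O(1) splice. Note the trade-off, though: with no random access you can't binary-search for the insertion point, so this pairs with the linear scan, not the binary-search variant.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insertion_sort_list(head):
    """Insertion sort on a singly linked list. Finding the insertion
    point is still a linear scan, but the insert is O(1) pointer
    surgery -- no block copy of the elements after it."""
    sorted_head = None
    while head is not None:
        node, head = head, head.next
        if sorted_head is None or node.value <= sorted_head.value:
            node.next = sorted_head        # new smallest: prepend
            sorted_head = node
        else:
            cur = sorted_head
            while cur.next is not None and cur.next.value < node.value:
                cur = cur.next             # linear scan for the insertion point
            node.next = cur.next           # O(1) splice instead of shifting
            cur.next = node
    return sorted_head
```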
In the end, I think you will find it to be data-, implementation-, language-, and platform-dependent. You could consider implementing versions of each in a compiled language and looking at the disassembled machine or byte code generated for them. If your target processor is pipelined or uses branch prediction, speculative execution, caching, or virtual memory, all of these can affect the relative performance on a given input.
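For instance, if your implementation language happened to be Python, the standard dis module will show you the bytecode the interpreter actually executes for a compare-and-swap (the function below is just an illustrative stand-in for an inner loop body); the C equivalent would be reading the compiler's assembly output.

```python
import dis

def compare_swap(a, i):
    """One inner-loop step of bubble sort: compare adjacent elements
    and swap them if they are out of order."""
    if a[i] > a[i + 1]:
        a[i], a[i + 1] = a[i + 1], a[i]

dis.dis(compare_swap)  # prints the CPython bytecode for the comparison and swap
```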