See step 1 of the 3 step bulleted list, but yeah, it generalizes to 2 steps with a shared initialization value.
Maybe worth noting that if your world has some known starting state (eg after map load), it might be worth the client and server using that as the implicit initial state, even when a client joins a game in progress.
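To make that concrete (purely illustrative, made-up names, not necessarily how the article structures it): "implicit initial state" just means both sides build the same baseline deterministically after map load and delta against it, so the baseline itself never goes over the wire.

    #include <cstdint>
    #include <vector>

    // Purely illustrative layout; not the article's actual snapshot format.
    struct Snapshot
    {
        std::vector<std::int32_t> values; // whatever quantized state you sync
    };

    // Both client and server construct this deterministically after map load,
    // so it never has to be transmitted.
    Snapshot BaselineAfterMapLoad()
    {
        return Snapshot{ std::vector<std::int32_t>(1024, 0) };
    }

    // Server: encode the current state relative to the shared baseline.
    std::vector<std::int32_t> EncodeDelta(const Snapshot& baseline, const Snapshot& current)
    {
        std::vector<std::int32_t> delta(current.values.size());
        for (std::size_t i = 0; i < delta.size(); ++i)
            delta[i] = current.values[i] - baseline.values[i];
        return delta; // mostly zeros early on => compresses well
    }

    // Client: apply the delta on top of the baseline it built locally.
    Snapshot DecodeDelta(const Snapshot& baseline, const std::vector<std::int32_t>& delta)
    {
        Snapshot out = baseline;
        for (std::size_t i = 0; i < delta.size(); ++i)
            out.values[i] += delta[i];
        return out;
    }

Even a late joiner can construct BaselineAfterMapLoad() locally and be brought up to date with a single, well-compressible delta instead of a full snapshot.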
I hear you. It's hard to know what a good benchmark would be. Do you have any ideas for what you'd like to see measured and possibly what it's compared against?
Note: that relies on doing proper sRGB -> linear instead of naive gamma conversion, unless I'm completely mistaken:
In [16]: math.log2(1./colour.models.oetf_reverse_sRGB(1.0/256.0))
Out[16]: 11.6915341649192
In [17]: math.log2(1./((1.0/256.0)**2.2))
Out[17]: 17.6
In [18]: math.log2(1./((1.0/256.0)**1.8))
Out[18]: 14.4
Although even if you use 2.2 gamma with 12-bit depth, the stuff you'd actually be losing should be fairly minimal, if I'm interpreting the numbers correctly.
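For anyone wondering where the gap comes from: sRGB isn't a pure power curve, it has a small linear segment near black, so the darkest code values map to noticeably larger linear values than x^2.2 would give you. Rough sketch of the two decodes (standard published sRGB constants, nothing pulled from any particular library):

    #include <cmath>
    #include <cstdio>

    // Piecewise sRGB decode: linear segment below the threshold, power curve above.
    double srgb_to_linear(double c)
    {
        return (c <= 0.04045) ? c / 12.92
                              : std::pow((c + 0.055) / 1.055, 2.4);
    }

    // Naive "gamma 2.2" decode.
    double gamma22_to_linear(double c)
    {
        return std::pow(c, 2.2);
    }

    int main()
    {
        const double darkest = 1.0 / 256.0;
        // ~11.7 bits vs ~17.6 bits, matching the numbers above.
        std::printf("srgb:  %f bits\n", std::log2(1.0 / srgb_to_linear(darkest)));
        std::printf("gamma: %f bits\n", std::log2(1.0 / gamma22_to_linear(darkest)));
    }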
Does this source code help you much with that? It only deals with wave files but can read / write 8, 16, 24 or 32 bit wave files, at whatever sample rate, with however many channels.
I really wish someone would make a header only C++ audio library, that would be soooo nice.
Wave files are pretty easy to deal with because the format is simple and the data isn't usually compressed. It's all the other formats that make this hard. Actually, it's probably not that hard, but parsing file formats isn't really a fun programming task (for me at least).
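For what it's worth, the header walking really is about as simple as it gets. A rough sketch of reading a canonical PCM wave file (not the linked code, just an illustration; little-endian host assumed, no error handling, "test.wav" is a placeholder name):

    #include <cstdint>
    #include <cstring>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::ifstream file("test.wav", std::ios::binary); // placeholder input file

        // RIFF header: "RIFF", overall size, "WAVE".
        char riff[4], wave[4];
        std::uint32_t riffSize = 0;
        file.read(riff, 4);
        file.read(reinterpret_cast<char*>(&riffSize), 4);
        file.read(wave, 4);
        if (std::memcmp(riff, "RIFF", 4) != 0 || std::memcmp(wave, "WAVE", 4) != 0)
            return 1;

        // Walk the chunks: each is a 4-byte id, a 4-byte size, then the payload.
        char id[4];
        std::uint32_t size = 0;
        while (file.read(id, 4) && file.read(reinterpret_cast<char*>(&size), 4))
        {
            if (std::memcmp(id, "fmt ", 4) == 0)
            {
                std::uint16_t format, channels, blockAlign, bits;
                std::uint32_t sampleRate, byteRate;
                file.read(reinterpret_cast<char*>(&format), 2);
                file.read(reinterpret_cast<char*>(&channels), 2);
                file.read(reinterpret_cast<char*>(&sampleRate), 4);
                file.read(reinterpret_cast<char*>(&byteRate), 4);
                file.read(reinterpret_cast<char*>(&blockAlign), 2);
                file.read(reinterpret_cast<char*>(&bits), 2);
                std::cout << channels << " ch, " << sampleRate << " Hz, " << bits << " bit\n";
                file.seekg(size - 16, std::ios::cur);    // skip any extra fmt bytes
            }
            else if (std::memcmp(id, "data", 4) == 0)
            {
                std::vector<char> samples(size);         // raw PCM samples
                file.read(samples.data(), size);
            }
            else
            {
                file.seekg(size, std::ios::cur);         // skip chunks we don't care about
            }
            if (size & 1) file.seekg(1, std::ios::cur);  // chunks are padded to even sizes
        }
    }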
I'm the author and didn't make this post on YC, but yes, the implementation is in 680 lines of standalone C++.
The point of the article isn't about C++ or why it's a good language for doing this sort of thing, but I'm a real-time graphics and game engine programmer, so it's my language of choice.
Does adding that little "_t" hurt you that much? Think about it: now every programmer reading your code will wonder "hmm... what is that uint16 type? Is it equivalent to uint16_t? Is it some weird macro designed to accommodate 1990s compilers?" etc. etc.
> const char *fileName
For the love of all things holy, use std::string_view for non-performance-critical stuff like this.
More generally, the only remotely C++-like thing in your code is the use of std::vector; the rest is honestly more C than C++. This shows, for instance, in SampleChannelFractional with that ugly #if 0. Proper C++ design would instead make SampleChannelFractional a function object that you pass as a template argument: this way, the user can choose which implementation they want without editing the source and recompiling, and without any indirection cost.
In addition, if you change some parts of your code, the C++ compiler will be able to check both code paths directly.
That is:
    #include <cstdint>
    #include <vector>

    struct LinearSampleChannelFractional
    {
        float operator()(const std::vector<float>& input
                         , float sampleFloat
                         , std::uint16_t channel
                         , std::uint16_t numChannels)
        {
            // your linear implementation here
        }
    };

    struct CubicSampleChannelFractional
    {
        float operator()(const std::vector<float>& input
                         , float sampleFloat
                         , std::uint16_t channel
                         , std::uint16_t numChannels)
        {
            // your cubic implementation here
        }

    private:
        float CubicHermite (float A, float B, float C, float D, float t)
        {
            // encapsulate it here:
            // the rest of your code does not care about this function.
        }
    };
    // First modification: pass Fractional as a template argument
    template<typename Fractional>
    void TimeAdjust (...)
    {
        // replace SampleChannelFractional:
        output[...] = Fractional{}(input, srcSampleFloat, channel, numChannels);
    }

    template<typename Fractional>
    void SplatGrainToOutput (...)
    {
        // same
        output[...] = Fractional{}(...);
    }
    // Second modification: refactor this into a Granulator class of some sort...
    // and use the standard C++ naming convention
    // (the function objects above get the same renaming:
    //  linear_sample_channel_fractional, cubic_sample_channel_fractional)
    template<typename Fractional>
    class granulator
    {
    public:
        void time_adjust (...)
        {
            // uses the class template argument
            output[...] = Fractional{}(input, srcSampleFloat, channel, numChannels);
        }

        void splat_grain_to_output (...)
        {
            // same
            output[...] = Fractional{}(...);
        }

        void granular_time_pitch_adjust (...)
        {
            // ... splat_grain_to_output(...)
        }
    };

    int main()
    {
        // now both kinds can be used at the same time in your code, and both will be
        // just as efficient as with the #if 1; for instance, the choice of which
        // granulator to use can then be part of a configuration option in a GUI
        granulator<cubic_sample_channel_fractional> gran1;
        gran1.time_adjust(source, out, numChannels, 0.7f);

        granulator<linear_sample_channel_fractional> gran2;
        gran2.time_adjust(source, out, numChannels, 0.7f);
    }
One other minor nitpick: know the difference between vector.resize vs vector.reserve. If all you're doing is copying new data into the vector after sizing it, use reserve. This avoids default-constructing all of the values inside of it, only to overwrite them with the new values you're copying in. In the case of primitive types it's probably not a big deal, but it's still doing a second pass over the data just to set it to zero before copying the contents of the file.
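Concretely, the two shapes look like this (buf/len are hypothetical, standing in for whatever you just read from the file). One thing to keep in mind: after reserve() the size() is still zero, so you append into the vector rather than index into it.

    #include <cstring>
    #include <vector>

    void fill_resize(std::vector<char>& v, const char* buf, std::size_t len)
    {
        v.resize(len);                       // zero-initializes len chars first...
        std::memcpy(v.data(), buf, len);     // ...then they get overwritten anyway
    }

    void fill_reserve(std::vector<char>& v, const char* buf, std::size_t len)
    {
        v.reserve(len);                      // allocates only, size() stays 0,
        v.insert(v.end(), buf, buf + len);   // so append instead of indexing
    }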
> This avoids default-constructing all of the values inside of it, only to overwrite them with the new values you're copying in.
I think I saw a few benchmarks once showing that for primitive types such as int it was actually more efficient to resize(); only past 32- or 64-byte structs did reserve() become the more interesting option. In any case, when nitpicking on this it's even better to use Boost.Container's vector, which allows an uninitialized resize: http://www.boost.org/doc/libs/1_66_0/doc/html/container/exte....
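For reference, the extended API looks roughly like this (from memory, so double-check against the linked docs):

    #include <boost/container/vector.hpp>

    int main()
    {
        boost::container::vector<char> v;
        // Grows the vector without value-initializing the new elements,
        // so there's no zeroing pass before you overwrite them.
        v.resize(1024 * 1024, boost::container::default_init);
    }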
That's interesting, and a bit counter-intuitive IMO. But I guess that's why we measure things. The reserve operation is essentially just a single reallocation, whereas the resize is a reallocation plus a bunch of zeroing out. Strange.