I did some work on compression a really long time ago, but I am very far from an expert in the field; in fact, I'm not an expert in any field ;) The best I ever did was a way to compress video better than what was available at the time, but wavelets overtook that and I have not kept current.
I'm curious about two things:
- Is it really that much better? (If so, that would by itself be a publishable result.) Where "better" means:
  - not worse for other cases
  - always better for the cases documented
  I think that's a fair challenge.
- Is it correct?
And as a sidetrack to the latter: can it be understood to the point that you can prove it is correct? Unfortunately I don't have experience with your toolchain but that's a nice learning opportunity.
I thought this too. If they're using the camera to measure brightness, it needs to be on when the user isn't actively using it - and if the activity LED is tied to the camera power rail (not sure if it is), it might look like there's something nefarious going on. No way Apple would let that go out the door.
Yes, but as a producer I would like simpler generation prompts, such as "Generate 15 variations of a kick that sounds like X". I think something like that would be much more useful.
Using a webcam, monitor finger movements and find mistakes (using some sort of AI video analysis) to help the user figure out how to improve. It's a hard thing to build, but if you build it there will be paying customers. You could even sell hardware and subscriptions with it. Lots of schools want this!
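For the tracking half of that idea, here is a minimal sketch in Python, assuming OpenCV for webcam capture and MediaPipe Hands for finger landmarks (neither tool is named above; they are just one plausible choice). Detecting actual playing mistakes would need separate logic or a model on top of the landmark stream.

```python
import cv2
import mediapipe as mp

# Hand-landmark detector; MediaPipe reports 21 landmarks per detected hand.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # default webcam

FINGERTIPS = (4, 8, 12, 16, 20)  # thumb, index, middle, ring, pinky tips

for _ in range(300):  # a few hundred frames, just for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            tips = [hand.landmark[i] for i in FINGERTIPS]
            # Normalized (x, y) fingertip positions; a real product would feed
            # these into mistake-detection logic (timing, wrong key, posture).
            print([(round(t.x, 2), round(t.y, 2)) for t in tips])

cap.release()
hands.close()
```

This only prints fingertip coordinates; the hard part the comment alludes to is turning that stream into feedback the user can act on.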
Beats the best compression out there by 6% on average, yet nobody will care because it was not hand-written.