Users sign up and authenticate with an OAuth provider (Google, Facebook)
Very basic profile info (public user name)
Ability for a user to select owned/listened-to headphones for their profile
Ability for a user to see other users' headphone profiles (if public)
Ability to rank owned headphones among ranking categories (this will need refinement; as Tom says, describe sounds as universally as possible)
Ability to see basic ranking results for any single headphone
Admin features for users with admin capabilities:
Add/edit headphones.
Add/edit brands.
Basically, all the edit interfaces for the data models driving rankings and headphones (exactly what this covers will depend on the final data model design).
Any advanced profile/user management (not sure there is any initially).
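To make the feature list above concrete for myself, here is a rough sketch of the data model it implies. All of the names and fields here (User, Brand, Headphone, Ranking, etc.) are placeholders for discussion, not the final design:

```python
# Hypothetical sketch of the core entities implied by the MVP list above.
# Every name and field here is illustrative, not a final design decision.
from dataclasses import dataclass, field

@dataclass
class User:
    oauth_provider: str                    # e.g. "google" or "facebook"
    oauth_subject: str                     # provider-issued user id
    display_name: str                      # the public user name
    is_admin: bool = False                 # admin capabilities flag
    profile_public: bool = True            # whether others can view the profile
    owned_headphone_ids: list[int] = field(default_factory=list)

@dataclass
class Brand:
    id: int
    name: str

@dataclass
class Headphone:
    id: int
    brand_id: int
    model: str
    kind: str                              # "iem" or "over-ear" (open question)

@dataclass
class Ranking:
    user_id: int
    criterion: str                         # e.g. "soundstage" (criteria set TBD)
    ordered_headphone_ids: list[int]       # best first
```

The point of sketching it this early is just to see how few entities the MVP actually needs; the admin edit interfaces would map one-to-one onto Brand and Headphone.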
This leads to a question of IEM vs. over-ear. Most places keep these rankings separate, so it's hard to know whether an IEM is competitive with some random over-ear. I am not sure if they should be listed separately.
Also, to maintain a decent database of headphone stats, I imagine "brand ambassadors" would have to be appointed as admins to keep their brands up to date. There is no way I could get everything entered unless there is a DB somewhere. And there is also the issue of revisions of the same headphone, etc. Lots of quirks to deal with.
What I would really like to do is develop exaggerated sound examples so you can hear the difference on any headphone, and play the clip with "poor soundstage" and "good soundstage", etc. I think at least some of this could be done that way, but it's not something I could do easily.
I want to restate what I said for MVP above for basic requirements in a simpler way:
Users can rank headphones among the criteria.
Users can see basic results for a given headphone.
Admins can manage the headphone database.
That's it. Finito. Done. Hopefully do that in a fairly polished manner, laying the groundwork for future features, enhancements, and new ideas, but also doing those in the simplest way possible.
In normal times, if I had a chunk of time off, I could accomplish a version of that myself. I used to do rapid prototyping of things like this on a regular basis. I just haven't been able to generate the time (you know, since having kids).
At the end of last year, I took some time to do exactly this, but I ended up sick and in quarantine (not COVID, thankfully) and lost three weeks (due to staggered family quarantines) that I could have worked on it.
This could be a worthwhile undertaking. Ideally, there would be an overall ranking as well as individual criterion rankings, and perhaps users could rank the importance of the criteria (which would drive that user's overall ranking). Good luck.
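One hedged way the "importance-weighted overall ranking" idea could work: combine the user's per-criterion ordered lists with a weighted Borda count, where the weights come from how much that user says each criterion matters. The scoring rule and all the names here are assumptions for illustration, not a settled design:

```python
# Sketch: derive a per-user "overall" ranking from per-criterion ordered
# lists, weighted by user-assigned importance. Weighted Borda count is an
# assumed scoring rule; headphone names are just examples.

def overall_ranking(criterion_rankings, weights):
    """criterion_rankings: {criterion: [headphone ids, best first]}
    weights: {criterion: importance weight chosen by the user}"""
    scores = {}
    for criterion, ordered in criterion_rankings.items():
        w = weights.get(criterion, 1.0)
        n = len(ordered)
        for position, hp in enumerate(ordered):
            # Borda points: best item gets n-1, worst gets 0
            scores[hp] = scores.get(hp, 0.0) + w * (n - 1 - position)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

rankings = {
    "soundstage": ["hd800s", "arya", "he6se"],
    "bass": ["he6se", "arya", "hd800s"],
}
weights = {"soundstage": 2.0, "bass": 1.0}
print(overall_ranking(rankings, weights))  # ['hd800s', 'arya', 'he6se']
```

A user who weighted bass higher would get a different overall order from the same underlying lists, which is the whole point of per-user weighting.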
I really want to distinguish between criteria that are, in theory, somewhat objective, and how much a listener actually appreciates those criteria.
Most people seem to agree that the 800s and the arya have among the widest soundstages. But that doesn't make it likeable. As I heard more headphones, I liked the arya less, because its soundstage seems disjointed in its width compared to the he6se.
I want to figure out how to capture both "holy crap, it is wide" and "I really like that" or "I don't like that".
Haven't completely figured out how to do that easily.
True. For soundstage, there's a distinction between (i) width (wide vs. narrow, and then one person's wide is another person's normal) and (ii) best/likable vs. worst/not likable.
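One simple way to capture both axes in the data model could be to keep two separate ordered lists per criterion: one for perceived magnitude and one for personal preference. This is only a sketch of the idea; the names and the example ordering below are illustrative:

```python
# Sketch: separate "how much of it is there" from "do I like it" by storing
# two ordered lists per criterion. All names/orderings are illustrative.
from dataclasses import dataclass

@dataclass
class CriterionRanking:
    criterion: str           # e.g. "soundstage"
    magnitude: list[str]     # widest first ("holy crap, it is wide")
    preference: list[str]    # most liked first ("I really like that")

soundstage = CriterionRanking(
    criterion="soundstage",
    magnitude=["hd800s", "arya", "he6se"],
    preference=["he6se", "hd800s", "arya"],  # wide but disjointed ranks lower
)
```

This would let the site report "widest soundstage" and "best-liked soundstage" as two different results from the same users, which seems to match the arya example above.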
There are certainly things to be sorted out, of which youāre aware. I look forward to how this develops.
Honestly, I have been trying to escape the Apple ecosystem. I was this close to getting out. Problem: the Android phone I purchased was recalled for exploding. After that I had to migrate back, and I was just done. Since then, too many Apple conveniences have held me to the platform.
It didnāt help that I have done (and occasionally still do) iOS development.
Most of the time, I just need my stuff to work, and I haven't had a great experience with that on other platforms (either pre- or post-switching to Apple). Eventually Apple's quality may drop enough that it makes the decision for me. But that hasn't happened yet.
I am in on your idea. How about developing a list of reference tracks later in your development? I am thinking of a playlist of songs that are good for assessing various characteristics. E.g., how does the acoustic bass come across in "So What" from Kind of Blue?
Simultaneous tracks. If the ranking is a simple ordered list (which has been my assumption all along, but maybe it shouldn't be), what a list is for (which criterion) is separate from the implementation.
Whether a criterion is "color" or "soundstage" doesn't really matter, as long as we can define the set of criteria based on final decisions.
Now, what is needed on a technical level is deciding whether simple ordered lists are the right solution. Should it be tier lists? I say probably not, because tier lists change too much over time. You may put a sundara in S tier until you hear an arya, etc. Simply saying the arya ranks above the sundara in soundstage is sufficient. This, of course, does not show the "distance" between them, but it seems to be more useful data from an analysis standpoint.
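The appeal of the ordered-list approach is that hearing a new headphone only requires one insertion relative to something you already know, rather than re-tiering everything. A minimal sketch of that operation (function and list contents are illustrative):

```python
# Sketch of the ordered-list approach: a per-criterion ranking is just a
# list, best first, and a new headphone slots in relative to a known one.

def insert_above(ranking, new_item, reference):
    """Place new_item immediately above reference in the ordered list."""
    idx = ranking.index(reference)
    return ranking[:idx] + [new_item] + ranking[idx:]

soundstage = ["sundara", "hd650"]
soundstage = insert_above(soundstage, "arya", "sundara")
print(soundstage)  # ['arya', 'sundara', 'hd650']
```

Compare that with a tier list, where hearing the arya might force you to demote the sundara out of S tier and rethink every other tier assignment.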
As to what the criteria should be, SBAF has some really good definitions; I figured we should start there. I do think they should be relatively objective. But I would love some help figuring out the smallest useful and relatively objective set of definitions (and sound examples).
To me, this needs expert reviewers' feedback. I can implement, but I can't define.
I was going to take the approach of building a functional example and then getting Resolve and the others involved, if they are willing, for the definitions. Basically, figure out the implementation first so there is something to demo.