Building our own headphone advisor site

Initial draft of MVP feature set:

User signup and authentication with an OAuth provider (Google, Facebook)
Very basic profile info (public username)
Ability for a user to select owned/listened-to headphones for their profile
Ability for a user to see other users' headphone profiles (if public)
Ability to rank owned headphones among ranking categories (this will need refinement, as Tom says to describe sounds as universally as possible)
Ability to see basic ranking results of any single headphone

Admin features for users with admin capabilities:
Add/Edit headphones.
Add/Edit brands.
Basically, all the edit interfaces for the data models driving rankings and headphones (exactly what this entails will depend on the final data model design).
Any advanced profile/user management (not sure there is any initially).
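The feature list above implies a few core entities. As a purely illustrative sketch (all names and fields are hypothetical, pending the final data model design), the MVP data model might look something like:

```python
from dataclasses import dataclass, field

@dataclass
class Brand:
    name: str

@dataclass
class Headphone:
    brand: Brand
    model: str
    revision: str = ""   # the same model can ship in multiple revisions

@dataclass
class User:
    username: str                 # the only public profile info in the MVP
    is_admin: bool = False
    profile_public: bool = True
    owned: list = field(default_factory=list)   # Headphone objects

@dataclass
class Ranking:
    user: User
    criterion: str                               # e.g. "soundstage"; criteria set TBD
    ordered: list = field(default_factory=list)  # Headphone list, best first
```

This is just one shape the data could take; whether rankings are stored as ordered lists is itself an open question discussed further down the thread.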

This leads to the question of IEMs vs. over-ears. Most places keep these rankings separate, so it's hard to know whether an IEM is competitive with a given over-ear. I am not sure if they should be listed separately.

Also, to maintain a decent database of stats on headphones, I imagine "brand ambassadors" would have to be appointed as admins to keep their brands up to date. There is no way I could get everything entered unless there is a DB somewhere. And there is also the issue of revisions of the same headphone, etc. Lots of quirks to deal with.

What I would really like to do is develop exaggerated example sounds, so you can hear the difference on any headphone: play a clip with "poor soundstage" and one with "good soundstage", etc. I think at least some of this could be done that way. But it's not something I could do easily.


That's a great idea. Maybe we start small and simple though and improve as we go? :wink::sweat_smile:


I want to restate what I said for MVP above for basic requirements in a simpler way:

Users can rank headphones among the criteria.
Users can see basic results for a given headphone.
Admins can manage the headphone database.

That's it. Finito. Done. Hopefully we do that in a fairly polished manner, laying the groundwork for future features, enhancements, and new ideas, but also doing those in the simplest way possible.

In normal times, if I had a chunk of time off, I could accomplish a version of that myself. I used to do rapid prototyping of things like this on a regular basis. I just haven't been able to find the time (you know, since having kids).

At the end of last year, I took some time to do exactly this, but I ended up sick and in quarantine (not COVID, thankfully) and lost three weeks I could have worked on it (due to staggered family quarantines).


I started working on some boilerplate last night!

Will see if I can keep it up!


This could be a worthwhile undertaking. Ideally, there could be an overall ranking as well as individual criteria rankings, and perhaps users could rank the importance of the criteria (which drives that user's overall ranking). Good luck.
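One way to realize "users rank the importance of criteria, which drives that user's overall ranking" is a weighted position score: a headphone earns points for its position in each criterion's ordered list, scaled by the user's importance weight for that criterion. A minimal sketch, with all names and the scoring scheme purely illustrative, not a settled design:

```python
def overall_scores(criterion_ranks, weights):
    """Combine per-criterion ordered lists into one overall ranking.

    criterion_ranks: {criterion: [headphone, ...]} with best first
    weights: {criterion: importance} chosen by the user
    A headphone scores (n - position) per criterion, Borda-style,
    scaled by the user's weight for that criterion.
    """
    scores = {}
    for criterion, ordered in criterion_ranks.items():
        w = weights.get(criterion, 0)
        n = len(ordered)
        for pos, hp in enumerate(ordered):
            scores[hp] = scores.get(hp, 0) + w * (n - pos)
    # highest total score first
    return sorted(scores, key=scores.get, reverse=True)

ranks = {
    "soundstage": ["arya", "he6se", "sundara"],  # best first
    "bass": ["sundara", "arya", "he6se"],
}
# A user who weights soundstage twice as heavily as bass:
print(overall_scores(ranks, {"soundstage": 2, "bass": 1}))
```

With these example weights the Arya comes out on top, since it leads the heavily weighted soundstage list. Other schemes (pairwise aggregation, rating scales) would work too; this only illustrates the "importance weights drive the overall ranking" idea.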


Yep!

Gotta be careful about all the ranking.

I really want to distinguish between criteria that are, in theory, somewhat objective, and subjective appreciation of those criteria.

Most people seem to agree that the 800s and the Arya have among the widest soundstages. But that doesn't make them likeable. As I heard more headphones, I liked the Arya less, because its soundstage seems disjointed in its width compared to the HE6se.

I want to figure out how to capture both "holy crap it is wide" and "I really like that" or "I don't like that".

Haven't completely figured out how to do that easily.


True. For soundstage, there's a distinction between (i) width (wide vs. narrow, and then one person's wide is another person's normal) and (ii) best/likeable vs. worst/not likeable.

There are certainly things to be sorted out, of which you're aware. I look forward to seeing how this develops.


And the real challenge there is how we get the user to provide all that information without overwhelming them.


Agreed. GUI and ELI5 as guiding principles, perhaps.

Thankfully I am an apple guy. And those come naturally to me. :wink:

Excellent! My first computer was an Apple IIe. I'm not currently locked into one ecosystem (let's call it being diversified; inefficient, I know).


Honestly, I have been trying to escape the Apple ecosystem. I was this close to getting out. Problem: the Android phone I purchased was recalled due to exploding. After that I had to migrate back, and I was just done. Since then, too many Apple convenience features have held me to the platform.

Damn, explosions are to be avoided.

I finally made the escape a couple years ago and have been quite happy with Android and my OnePlus 7 Pro.

Apple convenience/integration is hard to give up.


It didn't help that I have done (and occasionally still do) iOS development.

Most of the time, I just need my stuff to work, and I haven't had a great experience with that on other platforms (either before or after switching to Apple). Eventually Apple's quality may drop enough that it makes the decision for me, but that hasn't happened yet.

That's a big aside. Headphone site.


Pros and cons, tradeoffs, as with most things.

We'll see how it plays out for me also. I'm not entrenched in either one, yet.


I am in on your idea. How about developing a list of reference tracks later in your development? I'm thinking of a playlist of songs well suited to assessing various characteristics. E.g., how does the acoustic bass come across in "So What" from Kind of Blue?


I would actually like to go a step further and create samples (probably synthetic) that explicitly demo X vs Y.

I think many things could be simulated to an extreme. But if not, then curated samples will have to do: "Here are tracks we recommend you listen to."

All in on that. And a list of tracks (with time references) could easily be included on day 1 if we can curate them.

How about we start with defining the MVP, including the evaluation criteria? Gonna be hard enough as a first step…

Just saying, let's start very small with outlining the requirements. :wink:

Simultaneous tracks. If the ranking is simple ordered lists (which has been my assumption all along, but maybe it shouldn't be), what a list is for (which criterion) is separate from the implementation.

Whether a criterion is "color" or "soundstage" doesn't really matter, as long as we can define the set of criteria based on final decisions.

Now, what is needed on a technical level is deciding whether simple ordered lists are the right solution. Should it be tier lists? I say probably not. The reason is that tier lists change too much over time: you may put a Sundara in S tier until you hear an Arya, etc. Simply saying the Arya is above the Sundara in soundstage is sufficient. This, of course, does not show the "distance" between them, but it seems to be more useful data from an analysis standpoint.
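Ordered lists have a nice property for analysis: each one decomposes into pairwise "X is above Y" judgments that can be counted across users, without ever claiming a distance between headphones. A rough sketch of that idea (the function and headphone names are illustrative, not a committed design):

```python
from collections import Counter
from itertools import combinations

def pairwise_wins(user_lists):
    """Count, across users' ordered lists (best first), how often each
    headphone is ranked above another. Only relative order is recorded,
    never a 'distance' -- which is exactly what ordered lists provide."""
    wins = Counter()
    for ordered in user_lists:
        for higher, lower in combinations(ordered, 2):
            wins[(higher, lower)] += 1
    return wins

# Two users' soundstage lists, best first:
lists = [
    ["arya", "he6se", "sundara"],
    ["he6se", "arya", "sundara"],
]
wins = pairwise_wins(lists)
# Both users rank the Arya above the Sundara, so that pair counts 2;
# the Arya vs. HE6se pair splits 1-1.
```

These pairwise counts could later feed any aggregate ranking method, and unlike tier assignments they stay stable as a user hears new headphones.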

As to what the criteria should be, SBAF has some really good definitions; I figured we should start there. I do think they should be relatively objective. But I would love some help figuring out the smallest useful and relatively objective set of definitions (and sound examples).

To me, this needs expert reviewers' feedback. I can implement, but I can't define.

I was going to take the approach of building the functional example and then getting Resolve and the others involved, if they are willing, for the definitions. Basically, figure out the implementation first to demo.
