Towards Hive Learning In World Music

9. Hive Learning: Towards A P2P Music Visualization Aggregator Platform

Distributed Teaching And Learning In World Music

This blog is slowly gravitating towards the simpler (but perhaps less easily quantified) human aspects of this topic.

Before doing so, however, I need to outline some of the benefits and limitations associated with the two main music visualization technology stacks under review: SVG+CSS, and Web3D (the umbrella term for WebAR and WebVR).

As highlighted in my last (eighth) post, there are some hard trade-offs between SVG+CSS's highly structured, data-literal approach to music visualization, and WebGL's more figurative (if 3D-navigable) 'electronic arts' approach.

This choice hinges on interface-level, on-demand data bindings, which, though critical as an aid to visual association and hence musical understanding, cannot be achieved in fast, responsive WebGL-based solutions without sacrificing 3D navigability.

Put the other way round, WebGL’s volumetric, 3D modeling and navigation is simply not consistent with direct data interrogability.

Thanks to the hype surrounding WebAR and WebVR, this simple fact is often obscured. The kind of data transparency WebGL-based solutions still struggle (and often fail) to achieve has long been routine using SVG+CSS, yet the pressure to go with the flock is considerable.

Recent data-driven visualization libraries focused on the strengths of the browser DOM, SVG and CSS have put phenomenally flexible and potent visual modeling tools at our disposal. As can be inferred from the following screenshot, these 'lift' data right into the graphical elements, leaving them bristling with information.

The cherry on top? CSS gives us complete control over styling homogeneity, and with it, branding.

Intuitive Visual Learning Via Interactive, Interchangeable Score-Driven Instrument, Theory And Physical Models
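To make that data 'lifting' concrete, here is a minimal sketch using D3.js. The note fields and pixel scalings are illustrative assumptions of mine, not the platform's actual schema:

```typescript
import * as d3 from "d3";

// Illustrative note events; field names are assumptions, not the platform schema.
interface NoteEvent { pitch: string; onset: number; duration: number; }

const notes: NoteEvent[] = [
  { pitch: "D4",  onset: 0.0, duration: 0.5 },
  { pitch: "F#4", onset: 0.5, duration: 0.5 },
  { pitch: "A4",  onset: 1.0, duration: 1.0 },
];

// A classic D3 data join: each SVG rectangle carries its own datum,
// 'lifting' the data right into the graphical element.
d3.select("svg#score")
  .selectAll<SVGRectElement, NoteEvent>("rect.note")
  .data(notes)
  .join("rect")
  .attr("class", "note")
  .attr("x", d => d.onset * 100)        // 100 px per beat, illustrative
  .attr("width", d => d.duration * 100)
  .attr("y", 20)
  .attr("height", 10)
  .attr("data-pitch", d => d.pitch);    // exposed to CSS selectors and tooltips
```

With the data riding on the elements themselves, styling homogeneity reduces to ordinary CSS rules, for example rect.note[data-pitch^="A"] { fill: crimson; }.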

Nevertheless, despite the recent 'industrialization' or 'commoditization' of data visualization, data-driven, timeline-based and fully personalizable domain-specific aggregator platforms remain very much the exception.

SVG+CSS and WebGL-Based Approaches Compared

These two approaches can be compared at two levels:

  • technical capabilities (a little banal)
  • wider opportunities for value creation (gets interesting)

Effective comparison is somewhat obscured by the distinctly different types of WebGL environment, ranging from the heavily pre-processed (e.g. Unity workflows, which, in the name of platform personalization and responsiveness to on-demand source data, we can discard) to the programmatic (declarative + functional, e.g. React VR with D3.js).

Let’s try to identify some of those value creation opportunities. Working out from the central hexagon (describing the positioning of most current online music solutions) of the following diagram, with SVG+CSS, we gain:

Levels of Platform Sophistication, Working Out From Center
  • Direct person-to-person musical dialog through software-coordinated P2P interworking
  • Creative freedom through flexible selection and transformation mechanisms
  • Configuration freedom and diversity at multiple levels
  • Ultimately, AI-assisted fingering diversity across any world instrument
  • Data transparency, supporting multiple dependent animations
  • Direct dependent-animation synchronization, in lockstep

It is difficult to find anything like this level of ‘usefulness’ or ‘fitness for purpose’ in WebGL-based approaches. There are just too many practical impediments:

  • P2P interworking is more focused on spatial navigation, to the disadvantage of data binding.
  • Comprehensive, parametrized shape modeling (visiting all model permutations by 'generative' means, and central to instrument and theory tool diversity) is far more difficult to achieve.
  • Support for AI integration (and especially generic, reusable AI solutions) is limited by the lack of actionable data.
  • Communication and synchronization between musical source and dependent animations is more difficult to orchestrate.

Perhaps most important of all, visual music learning across any or all of notation, instrument, music theory, score analysis or physics models hinges both on hardware-controlled synchronization (to avoid visual drift) and on direct, interrogative access to the underlying score or audio-derived data (needed to 'feed' dependent animations).

Both imply direct data-to-GUI element bindings, which are much easier to achieve using SVG+CSS than in WebGL-based environments. I cannot emphasize the impact of this enough. Data. Is. All.
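As a rough sketch of what that lockstep synchronization might look like in the browser, assuming a shared Web Audio clock (the subscriber roles here are hypothetical):

```typescript
// One hardware-derived clock (the AudioContext) drives every dependent view,
// avoiding the visual drift that per-animation timers would accumulate.
const audioCtx = new AudioContext();

type DependentAnimation = (scoreTimeSec: number) => void;
const subscribers: DependentAnimation[] = [];

function register(anim: DependentAnimation): void {
  subscribers.push(anim);
}

let startTime = 0;
function play(): void {
  startTime = audioCtx.currentTime;
  requestAnimationFrame(tick);
}

function tick(): void {
  const t = audioCtx.currentTime - startTime;
  subscribers.forEach(update => update(t)); // all views advance in lockstep
  requestAnimationFrame(tick);
}

// Hypothetical subscribers: notation cursor, fingering and theory-tool views.
register(t => { /* move the score cursor to score time t */ });
register(t => { /* highlight the fingering active at time t */ });
```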

Hop Out Of The Box, Dude

At the heart of online teaching potential, then, is a canny choice of visualization library.

The DOM (SVG+CSS) approach is by far the more flexible. Moreover, it is proven (it has been around for almost two decades), and, where CSS (or a WebGL hybrid approach such as Stardust.js) can be used to implement transitions, it is fast.

Above all, it can be used now, and, given its consistently data-driven focus, is ideal as an integration stack for public domain artificial intelligence applications.

Some Technologies Supporting Visual Leverage Of Artificial Intelligence

Given free rein, these have the potential, in an increasingly work-free society, to return or transform music, comparative musicology, ethnomusicology and dance into something of a Volkssport, a free-time occupation for idle hands and minds — with profound impact on our social connection and wellbeing.

Though certainly not representing some 'universal theory of music', the platform in focus here will allow open-source developers to model wide-ranging musical behaviors and, where a match exists, have these automatically associated with (and driven by) a musical source. It will allow users to connect, share and interwork.

Quite Some Buzz Around Musical Hive Learning

Hive learning: a distributed musical dialog amongst peers, with visual context, immediacy and immersion for instruments and theory tools long held in static lockdown. Latency? It impacts all peer-to-peer teaching and learning, but is more than eclipsed by the benefits.

Musical hive learning. A galaxy of INTERACTIVE, SCORE-DRIVEN instrument model and theory tool animations is born. Entirely graphical toolset ultimately supporting P2P world music teaching and learning via video chat ◦ Paradigm Change ◦ Music Visualization Greenfield ◦ Crowd Funding In Ramp-Up ◦ Please Share

The platform addresses the disconnect between MusicXML (the slowly expanding W3C standard for music exchange and the foundation of much online notation), music visualization, world music instrument and theory tool modeling, and users' immediate learning needs.
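For orientation, extracting note data from MusicXML in the browser needs nothing beyond the standard DOMParser; a minimal sketch:

```typescript
// A pared-down (but structurally valid) MusicXML fragment.
const xml = `<score-partwise><part id="P1"><measure number="1">
  <note><pitch><step>A</step><octave>4</octave></pitch><duration>4</duration></note>
</measure></part></score-partwise>`;

const doc = new DOMParser().parseFromString(xml, "application/xml");
const parsedNotes = Array.from(doc.querySelectorAll("note")).map(n => ({
  step: n.querySelector("step")?.textContent ?? "",
  octave: Number(n.querySelector("octave")?.textContent ?? "0"),
  duration: Number(n.querySelector("duration")?.textContent ?? "0"),
}));
// parsedNotes: [{ step: "A", octave: 4, duration: 4 }]
```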

Crowd-Funding Landing Page

Together with the sibling blog, The Visual Future Of Music, this publication forms a backdrop to a crowd-funding campaign with the potential to unleash a galaxy of directly source-driven world-instrument, music-theory and score-analysis animations in a single online platform.

The aim is an embeddable SVG-modeling and largely CSS-animation toolset for ethnic teacher-musicians, bringing vastly improved musical diversity to online teaching. Compared to equivalent WebGL approaches, this is simplicity itself.

Everything Begins With A Base, Generic Model (Here, A Lute Family Instrument)

Each instrument family’s configuration permutations are generated from a generic model. Open-source developers create the dialogs and interactions allowing layered customization towards sometimes thousands of fully configured instrument permutations.

Instruments are configured layer by layer, typically in half a dozen steps, in much the same order as they might be constructed by a craftsman.

Configuration workflows are separate from actual usage, and are likely to be subject to community moderation. (They could be automated, but not every combination would make musical sense.)

Here we see the number of courses, notes or tones to the octave, just intonation, tuning and scale length configured for a Turkish bağlama or saz.
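Expressed as data, such a configuration might look something like the following sketch; the field names and values are illustrative assumptions only, and actual bağlama figures vary by region and maker:

```typescript
// A generic lute-family base model, specialized layer by layer.
interface LuteConfig {
  family: "lute";
  courses?: number;
  notesPerOctave?: number;
  intonation?: "equal" | "just";
  tuning?: string[];
  scaleLengthMm?: number;
}

const base: LuteConfig = { family: "lute" };

// Illustrative values for a Turkish bağlama / saz.
const saz: LuteConfig = {
  ...base,
  courses: 3,
  notesPerOctave: 17,
  intonation: "just",
  tuning: ["A3", "D3", "G3"],
  scaleLengthMm: 700,
};
```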

Here (an early proof of concept, and no beauty contestant) is a screenshot from a typical configuration dialog. The key advances are internal; the layout cosmetics are by comparison trivial.

Mockup Of Bouzouki Configuration, With Some Common Tunings

Each unique model is saved for use by the community, with individual models foreseen as embeddable in users' own websites.

Whether an instrument model, a theory or score analysis tool, or a model providing insights into related disciplines, the workflow is identical. This keeps the core platform code base extraordinarily compact, and the overall performance close to that of conventional ‘single-instrument’ solutions.
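One way to picture how a single workflow can serve every model type is a shared interface; a hypothetical sketch, not the platform's actual API:

```typescript
// Hypothetical: instruments, theory tools and analysis tools all satisfy one
// contract, so the core platform code handles them identically.
interface VisualModel {
  kind: "instrument" | "theory" | "analysis";
  configure(options: Record<string, unknown>): void;  // layered configuration
  render(svgRoot: SVGSVGElement): void;               // SVG+CSS rendering
  onScoreTime(tSec: number): void;                    // driven by the shared clock
}
```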

Platform Population Process. (Applies To Any Type Of Model: Instrument, Theory Or Analysis Tool)

Simple strategies underpin progressive modeling enhancement:

  • Models are defined at two granularities: shape (form) and musical properties (function). Shape or form classification hierarchies double as model repositories, with musical properties or function saved as subtrees representing specific musical configurations. This holds as true for instruments as for theory and other tools (see the sketch after this list).
  • Musical property mix-ins lead to reusable instrument configurations, with full control over scale or channel lengths, temperaments and intonations, number of notes to the octave (and, for just-intoned systems, which notes contribute to the whole), number of courses or channels, tunings, capoing and so on.
  • For each instrument family: robust, online and visual instrument configuration.
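The hierarchy-as-repository idea from the first bullet might be sketched as follows; names and nesting are illustrative:

```typescript
// The form (shape) classification tree doubles as the model repository;
// musical configurations (function) hang off its leaves as subtrees.
interface RepoNode {
  name: string;                                 // e.g. chordophones > lutes > saz
  children?: RepoNode[];                        // shape / form hierarchy
  configurations?: Record<string, unknown>[];   // musical property subtrees
}

const repo: RepoNode = {
  name: "chordophones",
  children: [{
    name: "lutes",
    children: [{
      name: "saz",
      configurations: [
        { tuning: ["A3", "D3", "G3"], notesPerOctave: 17 },  // illustrative
      ],
    }],
  }],
};
```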

Musical mix-in properties not only build on each other, but facilitate reuse across both instrument families and theory tools — the latter in multiple dimensions.

Musical Property Mix-Ins: Reuse On Steroids

Property mix-in logic implemented for one instrument is then available for all others, including instrumental hybrids such as the melodica (wind+keyboard) or guitar-lira (harp+guitar).
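In code terms, such mix-ins might compose along these lines; the property names are hypothetical:

```typescript
// Each mix-in packages one reusable slice of musical behavior.
const Tunable = { setTuning(notes: string[]) { /* retune the model */ } };
const Fretted = { setFretCount(n: number) { /* rebuild the fretboard */ } };
const Keyed   = { pressKey(midi: number) { /* animate a key press */ } };
const Blown   = { setBreath(level: number) { /* drive the reed model */ } };

// A guitar composes Tunable + Fretted; a melodica (wind + keyboard hybrid)
// simply combines Blown + Keyed. The same logic serves every family.
const guitar   = { ...Tunable, ...Fretted };
const melodica = { ...Blown, ...Keyed };
```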

All this breaks out of the data-impoverished, bitmapped strategy followed elsewhere, the aim here being fully responsive, data-driven music visualization. Many interface components (instrument fingerings, theory tool nodes and intervals, score analysis patterns) can be brought to life using CSS, and these animations are fast.

CSS Is Fast

Where CSS is not up to the job, hybrid SVG/WebGL 2D solutions can be applied.
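A taste of the fast path: animating with compositor-friendly properties (transform, opacity), here via the Web Animations API, CSS animation's scripting twin. The element id is an assumption, and this presumes a modern browser where CSS transforms apply to SVG:

```typescript
// Transform and opacity can be composited off the main thread,
// which is what makes CSS (and WAAPI) animation fast.
const dot = document.querySelector<SVGCircleElement>("#fingering-dot");

dot?.animate(
  [
    { transform: "scale(1)",   opacity: 0.4 },
    { transform: "scale(1.6)", opacity: 1.0 },
  ],
  { duration: 200, easing: "ease-out", fill: "forwards" }
);
```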

The ultimate goal is direct, in-browser, peer-to-peer music teaching and learning dialogs, much as might be experienced in face-to-face workshops, or as was widespread in earlier times. These are based entirely on shared and highly configurable models, technically on a par with existing 'off-the-shelf' data visualizations, but freely exchangeable.
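A hedged sketch of the transport behind such dialogs, using a WebRTC data channel; signaling is omitted, and the message shape is my own invention:

```typescript
// Peers exchange compact model events, not pixels; each side re-renders
// its local SVG model, keeping bandwidth (and the latency impact) low.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("model-sync");

channel.onopen = () => {
  channel.send(JSON.stringify({ model: "saz", event: "fingering", course: 1, fret: 3 }));
};

channel.onmessage = (e) => {
  const update = JSON.parse(e.data);
  // apply the update to the local instrument model here
};
```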

In addressing three major deficits in current music visualization (the lack of immediate musical context, a failure to reflect cultural diversity, and little data transparency), this platform can lend the many pretty, standalone and currently data-impoverished music visualizations returned by Google search some real value.

Moreover, it provides an end-to-end integration stack for a wide range of AI applications, and promises flexible and comprehensive data and modeling input to WebAR and WebVR applications.

Perhaps the central benefit is the elevation of data right into the graphical interface elements themselves, making it directly accessible (for example as a CSS tooltip) and interrogable (via configurable selection scopes) at various granularities. This level of interaction can be expected to have a profound impact on learning immediacy and immersion.
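Because the data rides on the elements, interrogation is plain DOM work. A small sketch building on the earlier data-pitch attribute; the selection scope shown is illustrative, and the tooltip here uses the native SVG title element rather than a pure-CSS one:

```typescript
// A configurable 'selection scope': every note whose pitch starts with A.
const scope = document.querySelectorAll<SVGRectElement>('rect.note[data-pitch^="A"]');

scope.forEach(noteEl => {
  noteEl.classList.add("highlight");

  // Native SVG tooltip, fed straight from the element's own data.
  const tip = document.createElementNS("http://www.w3.org/2000/svg", "title");
  tip.textContent = `Pitch ${noteEl.dataset.pitch}`;
  noteEl.appendChild(tip);
});
```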

Cultural Continuity: Lies In Our Hands

Short of a much more challenging move to WebGL-based solutions (e.g. WebAR and WebVR), this represents the rather obvious but as yet unexploited DOM-based route to immersive music diversity.

Crowd-Funding Landing Page


Project Seeks Sponsors. Open Source. Non-Profit. Global.

#VisualFutureOfMusic #WorldMusicInstrumentsAndTheory
