Cultural Heritage Accessibility through Multisensory Interactions - Updates from the MuseIT project
As the MuseIT project (Multisensory, User-centred, Shared Cultural Experiences through Interactive Technologies) reaches its final stage, this article offers an overview of its key achievements in promoting accessibility and inclusion in cultural heritage. Funded by the Horizon Europe programme, MuseIT brought together a diverse consortium of researchers, technologists, and cultural stakeholders to develop innovative tools for inclusive digital culture. By leveraging multimodal interfaces, immersive technologies, and AI-based accessibility solutions, the project enabled more meaningful cultural experiences for individuals with sensory, cognitive, and physical disabilities. This contribution highlights the main outcomes of MuseIT, emphasizing its interdisciplinary methods, community involvement, and ethical considerations, while laying the groundwork for future progress in inclusive technology and cultural engagement. A particular focus is dedicated to the event organized within the framework of the project, “Heritage Without Barriers”, held in Rome on May 14, 2025, at the National Central Library of Rome.
Introduction
Note on Authors 1
Multisensory, User-centred, Shared Cultural Experiences through Interactive Technologies (MuseIT)2 is a three-year project (2022-2025) funded under Horizon Europe and coordinated by the University of Borås, Sweden. The consortium comprises around 70 members across 12 partner organisations in nine countries. The project aims to advance inclusive and multisensory access to cultural heritage through participatory design, multimodal technologies, immersive experiences, and the promotion of improved policies.
An earlier overview of the project’s initial activities and perspectives was published in DigItalia in 20233.
As the MuseIT project nears its conclusion, in this report we reflect on a journey defined by innovation, inclusion, and interdisciplinary collaboration. Funded under the Horizon Europe framework, MuseIT set out with an ambitious goal: to make cultural heritage more accessible and inclusive through the use of advanced technologies, particularly for and with individuals with sensory, cognitive, and physical disabilities.
Since its inception, MuseIT has brought together researchers, technologists, artists, and stakeholders from across Europe to reimagine how people experience and engage with cultural content. Through a combination of cutting-edge solutions –including multimodal interfaces, immersive experiences, and AI-driven accessibility tools– the project has developed a comprehensive approach to inclusive digital culture.
This report outlines the main achievements of MuseIT across its core areas. From novel interaction paradigms and sensory translation systems to community engagement strategies and ethical frameworks, each section reflects our commitment to broadening access and participation in the cultural sphere. As the project draws to a close, we present these results not only as outcomes, but as foundations for future initiatives in inclusive technology and digital heritage.
Understanding user needs – a Participatory and Inclusive Approach
At the core of MuseIT’s methodology is a firm commitment to participatory design and inclusive research. From its outset, the project recognised that the meaningful inclusion of people with disabilities in the development process is essential –not only to ensure accessibility but to empower users as co-creators of cultural experiences. To this end, MuseIT has adopted a participatory co-design approach, working closely with potential users to shape technologies and experiences that genuinely meet their needs.
This approach has been grounded in established frameworks such as Universal Design and Inclusive Design and has actively involved users at different stages of the design and development process. Through over 60 participatory engagements –including workshops, interviews, symposia, and collaborative fieldwork– the project has strived to form an informed understanding of user needs across diverse groups and contexts.
MuseIT’s engagements were guided by values of equality, empowerment, and democratic collaboration. Participants contributed insights into the barriers they face when engaging with cultural content, and helped co-design solutions using multisensory and digital technologies, including virtual reality (VR) and AI-driven interfaces. These interactions have not only informed technical development but also enhanced awareness within the consortium of the lived experiences of the user communities.
In particular, co-design sessions allowed users to directly shape design concepts, test early prototypes, and propose refinements. This collaborative process has helped ensure that the outcomes are not only technically effective but socially meaningful and emotionally resonant. The participatory methodology also reinforced the project’s commitment to ethical research practices and the responsible development of inclusive technologies.
A key component of this process involved an interview study with individuals from the project’s target user groups, which illuminated both challenges and opportunities in engaging with cultural content. Participants described difficulties accessing cultural experiences –both physical and digital– due to poorly designed environments, lack of sensory accommodations, or overly complex navigation structures. Many emphasised how cultural spaces often default to “normalised” experiences that exclude or overlook diverse modes of perception and interaction.
At the same time, participants expressed a strong desire to connect with culture in ways that reflect their abilities, interests, and preferences. Opportunities identified included the potential of multisensory media, such as touch, sound, and movement-based interactions, to enable more immersive and meaningful experiences. Others noted the empowering role of co-creation itself: not only as a means of shaping technology, but as a form of cultural participation in its own right. These insights highlighted once again the importance of designing with –not just for– users, and reinforced MuseIT’s mission to foster inclusive cultural futures through collaborative innovation.
While MuseIT has made significant progress in developing inclusive digital solutions, it is important to acknowledge that several of the technologies produced remain at the prototype stage. Not all tools are currently fully accessible, and further refinement is needed to achieve universal usability. However, the project has established a solid conceptual and technical foundation: building blocks that can be improved, integrated with other systems, or scaled in future efforts. These prototypes4 serve as proof of concept, offering valuable insights and demonstrating clear pathways for how participatory design and inclusive technologies can evolve together to support broader accessibility goals. Alongside the prototypes, MuseIT has embraced standard solutions for research data management and long-term digital archiving, resulting in various Dataverse-based repository instances that will continue to be hosted by consortium members after the end of the project.
Technical Innovations for Inclusion
Building on the insights gained from participatory work, MuseIT has developed several innovative tools that translate inclusive design into technological solutions. Below, we outline some of these developments.
HaptiVerse: Enabling Meaningful Communication Through Touch
One of the technologies developed in the project is a system of interrelated components that enables the design and communication of meaning-bearing haptic messages. The HaptiVerse system5 represents a novel approach to haptic communication, developed with a focus on inclusion, accessibility, and co-design for use with and by the deafblind community. Building on existing practices such as Tactile Sign Language and Social-Haptic Communication (SHC), HaptiVerse introduces a modular framework for creating, transmitting, and receiving meaning-bearing haptic signals. These signals can be used to convey structured information through touch –something especially vital for individuals with dual sensory impairments. At its core, HaptiVerse consists of several interconnected components: HaptiDesigner (a tool for designing haptic patterns or “haptograms”), HaptiMux (a routing and communication hub), HaptiBoard (hardware for actuator control), HaptiMesh/HaptiWear (wearable interfaces), and HaptioTek (a shared, expandable library of haptic signs). This integrated ecosystem supports real-time, remote, and one-to-many communication, enabling individuals to receive haptic messages even without direct physical contact with a human sender.
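To make the division of roles more concrete, the sketch below shows, in Python, one way a meaning-bearing haptic message might be represented as data before being routed to a wearable. It is purely illustrative: the class names, fields, and the example sign are assumptions made for this article, not the actual formats used by HaptiDesigner, HaptiMux, or HaptiBoard.

```python
# Illustrative sketch only: a simplified stand-in for a meaning-bearing haptic
# message ("haptogram"), NOT the actual HaptiVerse data format.
from dataclasses import dataclass
from typing import List

@dataclass
class HaptiPulse:
    actuator: int       # index of an actuator on the wearable interface
    intensity: float    # 0.0-1.0, relative vibration strength
    duration_ms: int    # how long the actuator stays active

@dataclass
class Haptogram:
    label: str                # human-readable meaning, e.g. "yes", "applause"
    pulses: List[HaptiPulse]  # ordered sequence forming the tactile pattern

# A hypothetical two-pulse sign; a routing hub would forward such a message
# to one or many receivers for remote, one-to-many haptic communication.
sign_yes = Haptogram(
    label="yes",
    pulses=[HaptiPulse(actuator=3, intensity=0.8, duration_ms=200),
            HaptiPulse(actuator=3, intensity=0.4, duration_ms=100)],
)
```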
The system’s flexibility is a defining feature. It accommodates different user needs –whether they wish to design custom patterns, integrate their own devices, or simply stream haptic content. HaptiVerse is designed not only for assistive communication but also for experimentation and co-creation in broader contexts, such as cultural settings, education, and research.
Importantly, the development of HaptiVerse has been grounded in participatory design, informed by lived experiences of people with deafblindness. Co-design sessions and field testing helped identify key challenges, such as the need for low-latency communication, modular hardware, and intuitive pattern design. These insights have directly shaped the system architecture and usage scenarios.
While HaptiVerse is still in an experimental phase, its modularity and open design make it a promising foundation for future work. As the field of haptic communication matures and standards evolve, the system is built to adapt. It also lays the groundwork for a growing, community-driven haptic vocabulary –a step towards what may eventually become a tactile language.
Music Generation Based on the User’s Experience in the Frontend
The goal is to create a personalized musical piece that reflects both the user’s mood and the cultural heritage assets they explored on the MuseIT platform.
As the user navigates through the MuseIT interface, they can explore various cultural heritage assets. By clicking a button, they can activate their camera, which starts capturing their facial expressions to detect mood. The user can stop the recording whenever they choose.
Once the recording stops, the system processes the video using a machine learning model that analyses facial expressions and predicts the user’s mood over time. At the same time, the system keeps track of the artefacts the user visited. These are sent to the MuseIT Knowledge Graph, which provides meaningful insights about each artefact. This combined data –user mood and visited artefacts– is then sent to a music generation tool that creates a custom musical piece representing the user’s overall experience. The generated music is influenced by two main factors. First, the emotional tone of the music is shaped by the user’s mood. For example, if the model detects mostly happy expressions, the music follows a major key with uplifting characteristics. For sadder moods, the music shifts to a minor key, producing a more melancholic feel.
Second, the cultural context of the artefacts influences the musical style. For example, if the user explored a modern art artefact, the music might include electronic instruments, whereas if the artefacts are from the ancient period, the composition may use instruments like the lyre or aulos. By blending these two aspects –emotion and cultural context– the result is a unique, personalized musical piece.
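As a rough illustration of this two-factor mapping, the sketch below pairs a detected mood with a key, mode, and tempo, and a cultural period with an instrument palette. The dictionaries, values, and function name are hypothetical simplifications of the behaviour described above, not the project's actual music generation tool.

```python
# Hypothetical simplification of the mood + cultural-context mapping described
# above; the real MuseIT music generation tool is more sophisticated.

MOOD_TO_STYLE = {
    "happy":   {"mode": "major", "tempo_bpm": 120},
    "sad":     {"mode": "minor", "tempo_bpm": 70},
    "neutral": {"mode": "major", "tempo_bpm": 95},
}

PERIOD_TO_INSTRUMENTS = {
    "ancient": ["lyre", "aulos"],
    "modern":  ["synthesizer", "electric guitar"],
}

def music_parameters(dominant_mood: str, artefact_periods: list[str]) -> dict:
    """Combine the user's dominant mood with the periods of visited artefacts."""
    style = MOOD_TO_STYLE.get(dominant_mood, MOOD_TO_STYLE["neutral"])
    instruments = sorted({i for p in artefact_periods
                          for i in PERIOD_TO_INSTRUMENTS.get(p, [])})
    return {**style, "instruments": instruments or ["piano"]}

# e.g. mostly happy expressions while browsing ancient artefacts:
print(music_parameters("happy", ["ancient", "ancient"]))
# {'mode': 'major', 'tempo_bpm': 120, 'instruments': ['aulos', 'lyre']}
```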
This technology has been tested in the first pilot of the MuseIT project in Rome. The overall feedback was really promising: people enjoyed the experience and found it engaging. Many users mentioned that the music captured their overall experience exceptionally well, and they expressed a strong interest in trying it again. However, there is still room for improvement to further enhance its usability and impact.
Haptification and sonification of artworks
The sonification and haptification track of MuseIT centres on translating artworks into sound and physical sensation. Computer vision models extract concepts from the paintings; nearest-neighbour algorithms simplify colors to a predetermined scheme, which is then mapped to sound; and concepts and contours are rendered haptically. Together, these steps aim to give participants with limited sight an impression of what is in the painting.
For inspiration for the color conversion, we looked at Scriabin’s color circle, in which the composer associates musical notes with specific colors. For instance, the color red is associated with the note C, with lighter shades of red corresponding to higher notes. Traditionally, 12 notes are associated with shades of color; by identifying 5 shades per color, we distinguish 60 unique colors and their associated notes. By converting each pixel in the artwork to one of these color shades and loading the result into the Picture 2 Notes application6, a user can explore the color structure of the painting through hearing rather than vision.
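The color-to-note step can be sketched as a nearest-neighbour lookup against such a palette. The few palette entries and pitch labels below are illustrative placeholders; the actual application works with the full set of 60 color shades.

```python
# Minimal sketch of nearest-neighbour color quantisation to a note palette.
# The palette below is illustrative; the actual scheme distinguishes
# 12 base colors x 5 shades = 60 colors, each tied to a pitch.

PALETTE = [
    ((200, 30, 30), "C4"),    # red -> C (per Scriabin's circle)
    ((255, 120, 120), "C5"),  # lighter red -> higher C
    ((30, 30, 200), "E4"),    # illustrative entries for other hues
    ((30, 200, 30), "A4"),
]

def nearest_note(pixel: tuple[int, int, int]) -> str:
    """Map an RGB pixel to the note of its nearest palette color."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(pixel, c))
    rgb, note = min(PALETTE, key=lambda entry: dist2(entry[0]))
    return note

print(nearest_note((230, 90, 90)))  # a light red -> "C5"
```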
A haptic mode is also supported: we express the pixel currently hovered over by means of an RGBA frequency, which manifests as three actuators activating on a connected haptic device. This allows one to sense by vibration whether a color is red, green, or blue, with the intensity of the vibration corresponding to the proximity of the color to pure red, green, and blue.
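The haptic mode, in turn, can be sketched as a direct mapping from the hovered pixel's color channels to three actuator intensities; the normalisation and the actuator naming below are assumptions for illustration, not the exact behaviour of the connected device.

```python
# Sketch: drive three actuators from the red, green, and blue channels of the
# pixel currently hovered over. Stronger vibration means the color is closer
# to pure red, green, or blue. Actuator names are hypothetical.

def rgb_to_actuator_intensities(r: int, g: int, b: int) -> dict[str, float]:
    """Return relative intensities (0.0-1.0) for the three color actuators."""
    return {
        "actuator_red": r / 255.0,
        "actuator_green": g / 255.0,
        "actuator_blue": b / 255.0,
    }

print(rgb_to_actuator_intensities(240, 40, 10))
# {'actuator_red': 0.94..., 'actuator_green': 0.15..., 'actuator_blue': 0.03...}
```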
The conceptualization of the contents of a painting was done by a computer vision model, which describes the contents of the paintings in detailed text. This data was stored and openly published in the MuseIT Dataverse7, which makes this effort FAIR-compliant8.
After generating the data, the HaptiVerse backend was used to stream it to a connected vest. By converting the detailed text to the Haptic Subject Index9 –an approach that uses simple language to describe features of paintings– it is possible to transmit haptic patterns that conceptually describe what is in the painting; these take the form of a series of four clock faces, where each of the eight compass directions in a clock face corresponds to a single content feature. Finally, after the contours of shapes in a selected painting were extracted programmatically, we colored them in a single pure color (which is rare in paintings), an approach that allows them to be expressed haptically as well. As the cursor is drawn closer to or further from a shape contour, the haptic intensity of the device increases or decreases accordingly, allowing one to sense this kind of information by vibration.
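The sketch below illustrates both haptic encodings mentioned here: a 4 x 8 "clock face" pattern for subject features, in the spirit of the Haptic Subject Index, and a vibration intensity that grows as the cursor approaches a shape contour. The slot assignments and the distance threshold are invented for illustration and do not reproduce the official HSI vocabulary.

```python
# Illustrative only: encode subject features into a 4 x 8 pattern (four "clock
# faces" with eight directions each), and derive vibration intensity from the
# cursor's distance to the nearest contour point. Slot numbers and the
# distance threshold are invented for this sketch.
import math

def hsi_pattern(feature_slots: set[int]) -> list[list[int]]:
    """Return a 4x8 on/off grid; slot k activates face k // 8, direction k % 8."""
    grid = [[0] * 8 for _ in range(4)]
    for slot in feature_slots:
        grid[slot // 8][slot % 8] = 1
    return grid

def contour_intensity(cursor, contour_points, max_dist=50.0) -> float:
    """Vibration intensity in [0, 1]: stronger when the cursor nears a contour."""
    d = min(math.dist(cursor, p) for p in contour_points)
    return max(0.0, 1.0 - d / max_dist)

print(hsi_pattern({0, 9, 31}))                            # features on faces 0, 1 and 3
print(contour_intensity((10, 10), [(12, 14), (80, 80)]))  # close to a contour -> ~0.91
```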
Multimodal Transformations with Generative tools
The CUBE-MT10 dataset and benchmark was designed as part of the project to support more inclusive access to cultural heritage using generative AI. It includes artefacts from three cultural domains –cuisine, art, and landmarks– across eight culturally diverse countries: Brazil, France, India, Italy, Japan, Nigeria, Turkey, and the USA. For each artefact, we created six different formats –images, text, speech, music, Braille, and 3D models– to meet a range of sensory and cognitive needs, especially for people with disabilities.
The content was generated using multimodal AI tools from Hugging Face, including models like Stable Diffusion11 (images), FastSpeech 212 (speech), MusicGen13 (music), and Hunyuan3D14 (3D models). All these models can generate multimodal assets based on prompts and other forms of input: for example, Stable Diffusion can create images from textual prompts, whereas Hunyuan3D can generate printable 3D representations using images as input. This setup allowed us to produce diverse and culturally grounded representations efficiently. Each modality serves a different purpose. Images and 3D models offer helpful visual and tactile cues for people with language or literacy challenges. Speech and Braille support users with visual impairments, while music can convey cultural context when other forms are less accessible.
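As an example, the image and music modalities can be produced with the cited models roughly as follows. This is a minimal sketch based on the models' standard Hugging Face usage; the prompts, file names, and generation parameters are illustrative, and the actual CUBE-MT pipeline may differ.

```python
# Minimal sketch: generating an image and a music clip for one artefact with
# the Hugging Face models cited above. Prompts and parameters are illustrative.
import torch
import scipy.io.wavfile
from diffusers import StableDiffusion3Pipeline
from transformers import AutoProcessor, MusicgenForConditionalGeneration

prompt = "A traditional Japanese tea ceremony set, museum photograph"

# Image modality (Stable Diffusion 3 Medium)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("artefact_image.png")

# Music modality (MusicGen small)
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(text=["calm Japanese koto music for a tea ceremony"],
                   padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)
rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("artefact_music.wav", rate=rate,
                       data=audio[0, 0].cpu().numpy())
```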
To evaluate the dataset, we ran a workshop with people with aphasia, a condition that affects language. Participants engaged with the artefacts one modality at a time. Images and 3D models were particularly effective, helping users recognise and interpret content without needing to read or listen. The feedback highlighted the value of multimodal content in improving accessibility. The full dataset is openly available and integrated into the MuseIT platform for further research, design, and inclusive cultural applications.
Figure 1. Image from a workshop focused on people with aphasia for evaluating the usefulness of multimodal representations
Virtual museum and affective computing
MuseIT Virtual Museum is a fully immersive and multisensory eXtended Reality (XR) experience exhibiting cultural assets. This digital collection of artefacts was developed in collaboration with the Central Institute for the Union Catalogue of Italian Libraries (ICCU) of the Italian Ministry of Culture. Within the simulated, multi-thematic gallery, which faithfully mirrors the scale, layout, and visual detail of a physical exhibition, visitors don Virtual Reality equipment and navigate freely among digital 2D and 3D representations of the exhibits. The experience is enhanced with audio and experimental haptic interactions, which not only enrich the museum experience but, above all, facilitate accessibility. Various approaches for enhanced accessibility to the content of the virtual museum are applied, based on the framework proposed by the Centre for Research and Technology Hellas (CERTH)15.
We explored how VR technology can enable remote access to cultural experiences by creating high-quality virtual environments that closely resemble the physical world. Advanced 3D graphics render these spaces with high realism, while multisensory features (audio and haptic) reinforce inclusivity. Concurrently, an affective-computing protocol captures participants’ emotional responses throughout the self-paced VR session. Each visitor wears a non-invasive electroencephalogram (EEG) device to monitor brain activity and a wrist-worn sensor to record heart rate and skin conductance, ensuring that physiological data align precisely with every interaction in the virtual space. After the session, recorded signals can be analysed to reveal how individuals emotionally engage with digital heritage environments. This methodology applies neuroscience and biomedical sensing to cultural heritage, moving beyond traditional surveys to uncover subtle, non-conscious reactions. The resulting qualitative assessment can inform adjustments to haptic intensity, audio pacing, and spatial design, thereby enhancing emotional resonance and universal accessibility in future iterations of the MuseIT Virtual Museum.
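As a schematic example of the alignment step, timestamped physiological samples can be matched to interaction events, for instance with pandas. This is a hypothetical sketch: the column names, timestamps, and sampling shown here are illustrative and do not reproduce the actual acquisition software or protocol.

```python
# Hypothetical sketch: align wrist-sensor samples with VR interaction events by
# timestamp, so each interaction can later be annotated with the nearest
# physiological reading. Column names and values are illustrative.
import pandas as pd

signals = pd.DataFrame({
    "t": pd.to_datetime(["2025-05-14 10:00:00.0", "2025-05-14 10:00:00.5",
                         "2025-05-14 10:00:01.0"]),
    "heart_rate": [72, 74, 78],
    "skin_conductance": [0.41, 0.43, 0.52],
})

events = pd.DataFrame({
    "t": pd.to_datetime(["2025-05-14 10:00:00.6"]),
    "interaction": ["picked_up_3d_artefact"],
})

# For each interaction, take the most recent physiological sample at or before it.
aligned = pd.merge_asof(events.sort_values("t"), signals.sort_values("t"),
                        on="t", direction="backward")
print(aligned[["t", "interaction", "heart_rate", "skin_conductance"]])
```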
Figure 2. A participant experiencing the Virtual Museum workstation during the Pilot 1 event demonstration in Rome
A Knowledge Base for Arts and Inclusion
As demonstrated above, digital technology is increasingly able to capture aspects of multimodality. Documenting the multimodal aspects of cultural practices and products for the long term, however, remains a challenge. Another aspiration of MuseIT was therefore to create a repository that both contains artistic products from the performing arts in their multimodal nature and supports access to those digitized traces. For the ShareMusic16 collection, which entails a wide range of objects from scientific articles to recorded performances, a repository solution (based on the open-source Harvard Dataverse17 archival software) was created, equipped with specific ontologies and integrated into the webspace of information around ShareMusic.
Figure 3. Snapshot of the new user interface to the Knowledge Base for Art and Inclusion, <https://knowledgebase.sharemusic.se/>
AI-based tools and workflows have been tested and implemented for the co-design of the Knowledge Organisation System18 at the backend, as well as for a web interface that meets various user needs. By choosing an Open Science-based information engineering pathway, the forward compatibility of the current solution is supported, while standards in long-term preservation, such as the FAIR principles, are adhered to19,20.
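Because the Knowledge Base is built on standard Dataverse software, its contents can in principle also be queried programmatically through the Dataverse Search API. The sketch below assumes the ShareMusic instance exposes this standard endpoint to anonymous users; the query term is illustrative.

```python
# Sketch: query a Dataverse-based repository through the standard Search API.
# Assumes the ShareMusic backend instance exposes this endpoint anonymously.
import requests

BASE_URL = "https://database.sharemusic.se"  # Dataverse backend (see note 20)

resp = requests.get(f"{BASE_URL}/api/search",
                    params={"q": "accessibility", "type": "dataset"},
                    timeout=30)
resp.raise_for_status()

# Each search hit carries, among other fields, a name and a landing-page URL.
for item in resp.json()["data"]["items"]:
    print(item.get("name"), "-", item.get("url"))
```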
Engagement Activities
MuseIT has adopted a holistic approach, recognising that technological development alone is not sufficient to address the complex challenges of accessibility and inclusion. Instead, the project combines innovation with broader societal engagement. In addition to developing advanced technologies, MuseIT has actively contributed to policy studies, formulated actionable recommendations, and facilitated roundtable discussions. The project also prioritises awareness-raising, community engagement, and widespread dissemination of information to ensure that inclusive practices are understood, adopted, and sustained across sectors.
Building on this approach, the project has actively incorporated user engagement through co-creation and co-design methodologies and organised a wide range of engagement activities to involve users, experts, and stakeholders at multiple levels. These stakeholders include cultural institutions, disability organisations, researchers, technology providers, policy bodies, and user communities with whom the consortium had already established long-term relationships prior to the project. The network has since been broadened through targeted outreach, public events, and communications, ensuring diverse perspectives are represented. Our holistic approach and extensive user engagement efforts have not only supported the co-creation of inclusive technologies but also helped bridge the gap between research, policy, and real-world practice.
MuseIT has carried out more than 60 user engagement activities, spanning a wide spectrum of formats and levels of participation. From awareness-raising events and public symposia to in-depth co-design workshops, policy dialogues, and collaborative fieldwork, the project has actively involved diverse user groups, experts, and stakeholders. By drawing on established models, MuseIT has ensured that users are not merely recipients of information, but valued contributors and co-creators. These activities have also helped build internal capacity and a cross-disciplinary understanding within the consortium through hands-on learning and interdisciplinary dialogue. An outline of one such engagement activity is provided in the next section.
As the project approached its conclusion, several significant events took place. These included the final MuseIT symposium, “Beyond Boundaries – Multisensory Innovation for Inclusive Futures”, held in two parts on 2 September 2025 at the University of Borås and on 3 September at the Röhsska Museum in Gothenburg. The project team also took part in the Digital Heritage International Congress 202521 and hosted the second ACM Europe Summer School on Accessible and Inclusive Technologies22, which took place in Borås on 16–19 June 2025. These activities extended the project’s reach, deepened collaboration, and highlighted its contributions to inclusive, human-centred innovation.
Heritage without barriers: Pilot 1 event demonstration in Rome
One of the most engaging recent moments of public outreach was the event held in Rome on May 14, 2025: “Heritage Without Barriers. The MuseIT project for an Inclusive Cultural Experience through Technology”. The initiative was organized within the framework of the MuseIT project in collaboration with the Italian Ministry of Culture (MiC) – Central Institute for the Union Catalogue of Italian Libraries (ICCU) and the National Central Library of Rome.
The day began with institutional greetings from Paola Passarelli, General Director for Libraries and Cultural Institutes. This was followed by remarks from Stefano Campagnolo, Director of the National Central Library of Rome, and Giuliano Genetasio, Director of the Central Institute for the Union Catalogue of Italian Libraries, ICCU. Nasrine Olson, associate professor at the University of Borås and coordinator of the MuseIT project, then presented the project, highlighting its main challenges and key outcomes.
Next, Daniela Bottegoni and Alessia Varricchio from the Museo Tattile Statale Omero shared insights into their institution –an exemplary case of multisensory engagement with cultural heritage. Finally, Claretta Caroppo and Lorenzo Barello illustrated the practices of accessible performances and other inclusive initiatives promoted by the Teatro Stabile di Torino.
A roundtable discussion followed, moderated by Corinne Szteinsznaider from the Michael Culture Association. The panel brought together Cristina Da Milano (Centro Europeo per l’Organizzazione e il Management Culturale, ECCOM), Maddalena Battaggia and Lucia Sardo (Study Group on Inclusion of the Italian Library Association), Gabriella Cetorelli (UNESCO Office – Italian Ministry of Culture), and Dino Angelaccio (National Observatory on the Condition of Persons with Disabilities).
The discussion focused on the policy briefs published by the MuseIT project and subsequently expanded to address European Union policies on the topic, those implemented by the Italian Ministry of Culture and, more broadly, at the national level, as well as specific actions undertaken by the Italian Library Association (AIB) in the field of library accessibility.
Afterwards, the testing phase of the technologies developed within the MuseIT project was launched. Multiple workstations were set up, allowing participants to explore the platform and experiment with various tools, including haptic technologies. Five dedicated workstations were available: WebData, Haptiverse, Haptics, Virtual Museum, and Music Generation. These were managed by project partners with the support of facilitators23.
In parallel with the technology testing phase, attendees –divided into two rotating groups– took part in a workshop focused on the accessibility of cultural heritage. Participants were invited to share their experiences, both positive and negative, and to express their perspectives in a non-judgmental and attentive environment.
This contribution aims to present the main findings that emerged from the workshop.
To break the ice and help participants feel at ease, they were invited to write their name on a post-it note, introduce themselves, and instinctively choose a word they associated with the day so far. This activity resulted in two meaningful word clouds –each reflecting the responses of one group. While both were composed of positive terms, the differences between them highlight how the topic can evoke a range of emotions and be interpreted through multiple perspectives.
Figure 4. Word cloud from the first workshop
Participants were then asked to share their negative experiences regarding the accessibility of cultural heritage. One of the first topics to come up –and one of the most passionately discussed– was the use of language and terminology. When addressing the topic of disability, it is essential to use language carefully, as even seemingly neutral terms –such as “inclusion”– may reflect a perspective in which a majority seeks to include a minority. For this reason, many participants suggested that the term “equality” –as stated in Article 3 of the Italian Constitution– would be more appropriate.
A similar reflection was made on the expression “differently abled”, which was criticized for shifting the focus away from the person and for being inherently discriminatory, as it implicitly raises the question: differently abled compared to whom? Words are not neutral, and participants agreed on the importance of critically considering the language used when discussing disability.
Figure 5. Word cloud from the second workshop
Another issue that emerged was the subtle yet pervasive discomfort with disability that can be perceived even in environments where accessibility is ostensibly considered a priority. A particularly telling example was shared by a participant, who recounted how museum staff denied a guided tour for a group of individuals with cognitive disabilities, despite being informed that the visit was part of a pilot project for designing a tailored guide. At the same time, other tours were taking place, and the group was instead asked to follow the exhibition in reverse order.
These paradoxical experiences served as a starting point for a broader discussion on the importance of co-designing services together with persons with disabilities. Unfortunately, this crucial phase –essential for addressing the actual needs of the intended users– is often only partially implemented, covering isolated segments of the process. Participants highlighted a lack of continuity, leading to paradoxical outcomes: for instance, services that require the presence of a companion, thus failing to ensure user autonomy, or architectural solutions intended to improve accessibility that instead create new barriers, such as a door handle that narrows the passageway and ultimately renders a space inaccessible. As a result, some participants suggested placing more emphasis on making artworks accessible, rather than focusing on pathways, sharing a common frustration with the current approaches.
The “exceptionality” of these processes is a sign that sensitivity to the issue is changing, but the effort made so far is not enough. A cultural shift is needed: we must “be born into multiplicity” and, above all, offer an experience that is complex and fulfilling for everyone equally –an equitable experience, as one participant emphasized.
To achieve a change in mindset, participants emphasized the need for what one referred to as a “minority exercise” –an intentional effort to place oneself in the position of others in order to better understand how to ensure no one is left behind. Empathy and the experience of exclusion were central themes: some participants shared that their awareness significantly increased after they personally encountered temporary disability due to an accident or illness, which allowed them to experience firsthand the barriers others face daily.
A participant shared a successful professional experience in which, to address complexity, the museum she works for organized different teams for specific disabilities, involving people with disabilities and various professionals. This approach allowed for the consideration of aspects that would otherwise have been difficult to foresee.
Another issue raised by participants was related to technology, which, in line with the previous discussion, is often designed to assist people with disabilities but is then left unsupported, with no adequate training or awareness-raising for those who are meant to use and manage it. Technology is undoubtedly a valuable tool that has helped, and continues to help, both people with disabilities and others to communicate more easily and to engage in reading and cultural experiences with a certain degree of autonomy. However, on its own, it is not enough.
Later, during the workshops, participants were invited to share their positive experiences regarding accessibility. After an initial phase in which it seemed almost more difficult to recall and share positive episodes (despite the words chosen at the beginning of the meeting), many stories filled with humanity and emotion emerged.
Participants shared experiences such as the feeling of “being at home” in an exemplary museum where hospitality is genuinely extended to everyone; of theatre subscriptions gladly purchased by people with disabilities when performances are truly accessible; and of sensory experiences like “readings in the dark” designed to foster empathy and perspective-taking. Notably, it was emphasized that feeling part of society often begins with small gestures, for example, something as simple yet meaningful as receiving a wedding invitation written in Braille.
A clear and powerful idea emerged: the right to beauty [and access to culture] does not belong to just a few. If cultural services –and, by extension, all services– are truly designed with everyone in mind, then the richness that stems from diversity benefits everyone. This workshop experience, in enabling participants to speak, understand, and truly listen to one another, ultimately left everyone enriched.
Concluding Remarks
As the MuseIT project approaches its conclusion, this report presents some of the collective efforts that have shaped its course. Bringing together diverse partners, disciplines, and user communities, the project has aimed to explore how technology might support more inclusive and engaging encounters with cultural heritage. The work completed so far suggests promising directions.
MuseIT has provided a space for experimenting with multisensory technologies, participatory methods, and inclusive design principles. Through co-design and dialogue with individuals with a wide range of abilities, the project has aimed to form a better understanding of user perspectives and the barriers people face, creating the opportunity for collaborative efforts to help address them.
We acknowledge that much remains to be done. Nonetheless, we hope that the approaches taken and the insights gained will prove useful to others working in the fields of accessibility, digital heritage, and inclusive technology.
Co-funded by the European Union
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
The websites cited were last consulted in December 2025.
Note
- Other authors who contributed to the paper are Thomas Van Erven, Sándor Darányi, Konstantinos Avgerinakis, Georgia Georgiou, Eleftherios Anastasovitis, Spiros Nikolopoulos, Sophia Alexandersson, Nigel Osborne, Kim Ferguson, Nitisha Jain, Andrea Scharnhorst.
- https://www.muse-it.eu/.
- Maud Ntonga — Juliette Pokorny — Nasrine Olson, The MuseIT Project: co-designing inclusive technologies for better access to culture, «DigItalia. Rivista del digitale nei beni culturali», 18 (2023), n. 1, p. 187-190, <https://digitalia.cultura.gov.it/article/view/3009>.
- For a better understanding of the developed prototypes, the training videos available at the following link can be consulted: <https://www.youtube.com/watch?v=BPN835oq5O4&list=PL11mJw4sA9UPMVYQ8CK_LzyP3CscJSCbQ>.
- The HaptiVerse system is a software solution that allows for the digital design of haptic devices and haptograms, and for the construction of basic sentences, aimed at iterative development with Social Haptic Communication.
- The Picture 2 Notes application is a simple JavaScript application, which can produce musical notes and haptic feedback when a user interacts with an image loaded into it.
- MuseIT used Dataverse and created various repository instances; see also note 17 on the Dataverse project in relation to the ShareMusic case. For a general reference, see Mercè Crosas, Cloud Dataverse: A Data Repository Platform for the Cloud, «CIO Review Open Stack», 2017, <https://openstack.cioreview.com/cxoinsight/cloud-dataverse-a-data-repository-platform-for-the-cloud-nid-24199-cid-120.html>.
- FAIR: Findable, Accessible, Interoperable, Reusable: <https://www.go-fair.org/fair-principles/>.
- Haptic Subject Index, or HSI, is a simplified schema of 4 x 8 wind-rose directions to communicate specific subject information about the artwork.
- https://github.com/albertmeronyo/CUBE-MT.
- https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers.
- https://huggingface.co/facebook/fastspeech2-en-ljspeech.
- https://huggingface.co/facebook/musicgen-small.
- https://huggingface.co/tencent/Hunyuan3D-2.
- Eleftherios Anastasovitis — Georgia Georgiou — Eleni Matinopoulou — Spiros Nikolopoulos — Ioannis Kompatsiaris — Manous Roumeliotis, Enhanced Inclusion through Advanced Immersion in Cultural Heritage: A Holistic Framework in Virtual Museology, «Electronics», 13 (2024), n. 7, 1396. <https://doi.org/10.3390/electronics13071396>.
- Share Music & Performing Art is a project partner of MuseIT: <https://www.sharemusic.se/en/home>.
- The Dataverse Project, <https://dataverse.org/>, is open-source research data repository software developed by the Institute for Quantitative Social Science at Harvard University and supported and used by many other institutions. Vyacheslav Tykhonov is one of the “ambassadors” of this open software community, and at DANS, as part of the <SSHOC.eu> project, a workflow to deploy Dataverse was developed. Marion Wittenberg — Vyacheslav Tykhonov — Eko Indarto — Wilko Steinhoff — Laura Huis in ‘t Veld — Stefan Kasberger — Philipp Conzett — Cesare Concordia — Peter Kiraly — Tomasz Parkoła, D5.5 ‘Archive in a Box’ repository software and proof of concept of centralised installation in the cloud, «Zenodo», 2022, <https://doi.org/10.5281/zenodo.6676391>.
- Knowledge Organisation System is a technical term to describe any structural model to organise knowledge. Examples are thesauri, classification systems, and ontologies. For a definition see <https://www.isko.org/cyclo/>. In the case of the ShareMusic Knowledge Base, the KOS combines standard bibliographic information with specific facets describing the multimodality of some content and ways to access content.
- Moa Johansson —Vyacheslav Tykhonov — Sophia Alexandersson — Kim Ferguson — James Hanlon — Andrea Scharnhorst — Nigel Osborne. A Knowledge Base for Arts and Inclusion: The Dataverse Data Archival Platform as a Knowledge Base Management System Enabling Multimodal Accessibility, in: Human-Centered Design, Operation and Evaluation of Mobile Communications. HCII 2025, ed. By June Wei, George Margetis, (Lecture Notes in Computer Science; 15824), Cham: Springer, 2025, p. 291-309, <https://doi.org/10.1007/978-3-031-93064-5_19>; preprint: <https://arxiv.org/abs/2504.05976>.
- Dataverse at the backend: <https://database.sharemusic.se/>.
- https://digitalheritage2025.unisi.it/.
- https://europe.acm.org/seasonal-schools/accessible-inclusive-technologies.
- At each workstation, after completing the tests, users were asked to answer a set of evaluation questions. The results of this feedback will be published in project deliverable D7.3. MuseIT Pilot Demonstrators 2nd Iterations.
