Close your eyes and imagine: what’s the future like?

After outlining the actors, affected people, and sectors within a possible new direction in the creative process, let’s try to visualize what the current state of things looks like and what it could become after the introduction of a new methodology.

Before

  • involvement of AI in all stages of storytelling generation/ideation/creation
  • no precise and documented knowledge on how to do it in this precise field
  • prompts to AI tools often poorly written, without purpose
  • Too many requests = extreme waste and environmental impact
  • Unethical use of AI

After

  • easy to use / step by step methodology to understand and use a correct workflow in the creation process
  • Aware and ethical use of AI tools
  • Fewer prompts / requests sent, with more efficient answers
  • Open a new dialogue
  • Set a standard or starting point for the field

Right now, most creatives are figuring these new directions out alone. A structured methodology could change that and optimize our workflow without making us feel left out of the process. It’s not just about better prompts, but about reflecting on how an entire field relates to a tool that is now part of our everyday life.

Who is involved?

Trying to make sense of how AI fits into the creative process meant first of all looking at who is actually involved in this process and who could potentially be affected by something new in this field. The system map I put together tries to capture the messy, layered connections that a new methodology for AI-assisted storytelling would actually have to deal with. What surprised me most was how quickly the actors in the circles can expand and multiply. The part I keep coming back to is where genuine human intentionality sits in all of this. When so many actors are pulling in different directions, tracing where creative agency actually lives feels less like a technical problem and more like a human one.

D&R2 SED – Inclusion & Accessibility 3/3

On a functional and sensory level, creating inclusion requires identifying exactly where the barriers are. For a player to be truly included, the design must look beyond just visuals and incorporate haptic feedback/vibration and audio cues to support those with different sensory abilities. As seen in the exploration of colorblindness, a major barrier is often a reliance on a single channel of information (like color) to communicate vital game data.

Identifying these barriers makes it evident that participation in a game world is shaped by how much information is “translated” across different senses. The “Who and How” of inclusion depends on moving away from a “one-size-fits-all” visual approach and instead providing multi-sensory tools—like vibration and sound—that allow players to navigate the game regardless of their visual or physical constraints.

D&R2 SED – Change and Impact 2/3

In the “before” scenario, Game UI is often static, cluttered, and not completely accessible. Design decisions are frequently made without a standardized framework, resulting in interfaces that annoy or confuse players of differing experience levels. These “standard” UIs look “bad” in the sense that they fail to meet the functional needs of everyone, leading to a disconnected and frustrating user experience.

In contrast, the “after” scenario highlights a shift toward a more scientific and flexible approach. By adapting and checking designs against WCAG (Web Content Accessibility Guidelines), the UI becomes a tool for education as well as interaction. The impact is a “Modular and Flexible” UI that uses minimal requirement guidelines to reduce clutter. This shift ensures the player is no longer fighting the interface but is instead empowered by a system that adapts to their specific level of experience and physical needs.
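The idea of checking designs against WCAG can be made concrete with a small sketch. The snippet below implements the WCAG 2.1 relative-luminance and contrast-ratio formulas, the kind of automated rule a modular UI builder could run over its color choices; the function names are hypothetical, but the thresholds (4.5:1 for normal text, 3:1 for large text) are the real WCAG 1.4.3 values.

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        # sRGB linearization as defined by WCAG 2.1
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def meets_aa(fg, bg, large_text=False):
    """WCAG 2.1 AA check: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

For example, black on white yields the maximum 21:1 ratio, while a mid gray (128, 128, 128) on white passes only for large text; a builder could surface such failures before a player ever sees them.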

D&R2 SED – System Map 1/3

At the center of the system is the Game UI Modular Builder. This is the core tool or framework designed to bridge the gap between static design and player needs. Surrounding this center are the primary creators—UI/UX Designers and Game Designers—who directly build and implement the interface elements. The next layer includes the active users: Gamers, Students, and Visually Impaired Players, whose diverse needs and feedback loops drive the modularity of the system. On the outermost edge, the Game Company acts as the broader stakeholder, providing the professional and institutional context that allows the system to exist.

The map highlights the “stitched” connections between these groups, showing that a modular builder is not just a piece of software, but a meeting point for professional design and lived user experience. It reveals that accessible UI is a collaborative ecosystem where the designer’s tools must directly respond to the player’s specific barriers.

The impact of generative models on data-driven narratives. A quick overview

After starting with a giant question mark about the role AI plays nowadays in the field of data visualization, I decided to begin with a literature review to narrow down and better frame which topic(s) to focus on. I have therefore decided to investigate how LLMs and generative models intervene in data-driven storytelling (meant as turning data into easy-to-read, easy-to-understand stories that help turn insights into action).

A systematic review on telling stories with data deeply guided my curiosity and focused me on the topic of human intentionality. In addition, many papers also questioned the modern role of the author in the creative process. I want to better understand where exactly the author stands today and how empathy and human intention, which are not entirely replicable by machines, fit in a world dominated by algorithmic storytelling. Therefore, I am exploring the co-construction of meaning to observe how humans and algorithms might merge to build these new narratives.

As my literature review expanded, several critical points emerged. I’d like to explore how algorithmic suggestions might perpetuate a structural bias, compared to a possible unintentional “human manipulation”. It is also crucial to question whether relying on automated micro-narratives is always the right choice when considering a diverse user base. To conclude (as if these were not already enough questions), I plan to explore the sociological aspect of epistemic control, understanding to whom it may belong.

But for now, let’s keep the focus on just the creative process adopted to create a narrative starting from data, and see how the data community behaves when involving AI!

Inclusion and Accessibility

The third step in analyzing my project’s users was inclusion and accessibility. This happened in two phases, in which I asked myself some questions.

For whom is the experience accessible? What is needed for the full experience?

  • A mobile phone with GPS
  • An Internet connection
  • Vision, and in fact good eyesight
  • Basic knowledge about light pollution

What are the barriers? How do we make the product accessible?

  • Vision: for blind people, there could be an audio guide working with GPS that describes what is visible in the sky; the phone camera pointed at the sky could help report data about light pollution;
  • Myopia: the experience should be AR, so that someone who does not see well from afar can still see the stars on their phone thanks to the camera; the phone camera could help report data about light pollution;
  • Hyperopia: when using AR and reading words on the screen, there should be an option to zoom in; text size changes and text-to-speech features should be supported;
  • Internet connection: there should be an option to save a report even when offline; the data is then sent as soon as the Internet connection is re-established, with written feedback and progress shown to the user;
  • Basic knowledge about light pollution: the app could have brief explanations or information buttons beside some sections, as well as explanations about the issue and its importance in general;
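The offline-report behavior described above can be sketched as a simple queue: reports that fail to upload are kept locally and retried once connectivity returns. This is a minimal illustration only; all names (`OfflineReportQueue`, `send`) are hypothetical, not part of any existing app.

```python
class OfflineReportQueue:
    """Queue light-pollution reports locally when offline, flush when back online."""

    def __init__(self, send):
        self.send = send      # function that uploads one report; raises ConnectionError if offline
        self.pending = []     # reports saved while offline

    def submit(self, report):
        """Try to send immediately; on failure, keep the report locally."""
        try:
            self.send(report)
            return "sent"
        except ConnectionError:
            self.pending.append(report)
            return "saved offline"

    def flush(self):
        """Retry all pending reports; return how many were delivered."""
        still_pending = []
        for report in self.pending:
            try:
                self.send(report)
            except ConnectionError:
                still_pending.append(report)
        delivered = len(self.pending) - len(still_pending)
        self.pending = still_pending
        return delivered
```

The return values of `submit` and `flush` give the app the material for the written feedback and progress messages mentioned in the list.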