Questions

<aside> 🧟 Post your two questions below!

</aside>

  1. When writing the pitch of notes, is it recommended to stick to one form of notation, or could I switch between notations, for example Scientific Pitch Notation and Hertz, in the middle of my code? (John Kim)

Tone.js doesn't care if you use multiple notations, but it will probably confuse the humans who read your code (like you)
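If memory serves, Tone.js methods like `triggerAttackRelease` accept either a note name (`"C4"`) or a raw frequency in Hz (`261.63`) interchangeably. The two notations describe the same thing, which this small sketch (plain JavaScript, no Tone.js required; the function name is my own) makes concrete by converting Scientific Pitch Notation to Hertz:

```javascript
// Convert Scientific Pitch Notation (e.g. "A4", "C#3") to Hertz,
// using equal temperament with A4 = 440 Hz.
const SEMITONES = { C: 0, "C#": 1, D: 2, "D#": 3, E: 4, F: 5, "F#": 6, G: 7, "G#": 8, A: 9, "A#": 10, B: 11 };

function spnToHz(note) {
  const match = note.match(/^([A-G]#?)(-?\d+)$/);
  if (!match) throw new Error(`unrecognized note: ${note}`);
  const [, name, octave] = match;
  // MIDI note number: C4 = 60, A4 = 69
  const midi = SEMITONES[name] + (Number(octave) + 1) * 12;
  return 440 * Math.pow(2, (midi - 69) / 12);
}

console.log(spnToHz("A4"));              // 440
console.log(spnToHz("C4").toFixed(2));   // "261.63"
```

Whichever form you pick, using it consistently means a reader only has to hold one mental model of pitch while scanning your melody data.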

  2. Is there a way to hard-set the time signature of your melody in Tone.js, or when writing my melody would I just have to keep the number of beats and the subdivision of the notes in mind? (John Kim)

from memory (so maybe wrong) you can set the tempo, and you might be able to set a time signature that will change how the duration notation is interpreted

however, time signature is subtle: <https://music.stackexchange.com/questions/568/is-there-any-real-world-difference-between-time-signatures-such-as-4-4-and-8-8>
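Whether or not the library exposes a time-signature setting (I believe Tone.js has `Tone.Transport.bpm` and `Tone.Transport.timeSignature`, but check the docs), the underlying beat math is simple enough to keep in your head. A sketch, assuming one beat means one quarter note:

```javascript
// Duration bookkeeping if you track tempo and meter yourself.
// (Tone.js reportedly exposes this via Transport.bpm.value and
// Transport.timeSignature — verify against the current API docs.)

function secondsPerMeasure(bpm, beatsPerMeasure) {
  // one beat = one quarter note at the given BPM
  return (60 / bpm) * beatsPerMeasure;
}

console.log(secondsPerMeasure(120, 4)); // 2    (4/4 at 120 BPM)
console.log(secondsPerMeasure(120, 3)); // 1.5  (3/4 at 120 BPM)
```

Knowing the measure length in seconds lets you sanity-check that the note durations you wrote actually fill the bars you intended.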

(1) Which type of computational music generally sounds the most natural or “human-made”? Algorithmic, aleatoric, or generative? (Brian Lau)

pass

(2) How can you use music (either computationally generated or already existing) as an input that is “translated” into digital 2D art? I assume you would have to remap the notes somehow? (Brian Lau)

this sort of thing is sometimes called "transcoding" or "visualizing" or other things depending on how you approach it. Really, you could make up infinitely many ways to map the data from music to 2D art.
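To make "remapping the notes" concrete, here is one made-up mapping (every choice below is arbitrary and illustrative, not a standard): time becomes x, pitch becomes y, velocity becomes size, and pitch class becomes hue.

```javascript
// One arbitrary mapping from a note event to a 2D mark.
// Input: { time: seconds, midi: MIDI note number, velocity: 0..1 }
function noteToMark(note) {
  return {
    x: note.time * 100,              // 100 px per second of music
    y: 500 - (note.midi - 21) * 5,   // higher pitch = higher on canvas
    radius: 2 + note.velocity * 10,  // louder = bigger mark
    hue: (note.midi % 12) * 30,      // 12 pitch classes → 360° hue wheel
  };
}

const mark = noteToMark({ time: 1.5, midi: 69, velocity: 0.8 }); // A4
console.log(mark); // { x: 150, y: 260, radius: 10, hue: 270 }
```

Swap any right-hand side for a different formula and you get a different piece of art from the same music, which is exactly why the mapping space is infinite.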

1.) Something that I noticed when playing with Jake Albaugh’s arpeggiator is that there would be moments where it felt like there was a “tension” that should be broken. Is that something that just happens and are there ways to relieve it? (Alex Silva)

this reminds me of content in "Melody in Songwriting" by Jack Perricone, which I highly recommend. I'm not really that strong at music theory, so I think you are better off with Jack

2.) The Target Characteristics that you provided in the example produced a constrained set of results that sound pretty harmonious and “good”. How can we learn to come up with our own constraints that produce “good” results as well? (Alex Silva)

many of those rules were adapted from ideas I learned about in "Melody in Songwriting"

How is music generated by an AI platform like Riffusion different from generative music? (Isha)