Display control apparatus and method

A display control technology, applied in the fields of digital output to display equipment, image data processing, sound analysis, etc., which can solve problems such as the inability to display multiple sound attributes simultaneously

Active Publication Date: 2013-07-03
YAMAHA CORP
9 Cites 10 Cited by

AI-Extracted Technical Summary

Problems solved by technology

However, according to the technology disclosed in the relevant patent documents, only one additional piece of information (that is, an ...

Abstract

The invention provides a display control apparatus and method. A control section (10) analyzes sound data to acquire data indicative of a plurality of attributes, such as pitch and volume, and displays, on a display screen (40), graphics indicative of the acquired pitch and volume. At that time, the control section (10) displays, on the display screen, a pitch curve where a value of pitch is represented by the vertical axis while the passage of time is represented by the horizontal axis. Also, at a position on the display screen based on a displayed position of the pitch curve, the control section displays a volume graphic where a level of volume is represented by a length or distance or width, in the vertical-axis direction, of the volume graphic.

Application Domain

Electrophonic musical instruments · 2D-image generation +4

Technology Topic

Graphics · Vertical axis +2


Examples

  • Experimental program(1)

Example Embodiment

[0021]
[0022] Fig. 1 is a diagram showing the configuration of a system to which an embodiment of the present invention is applied. The system includes a karaoke apparatus 100, a server apparatus 200, and a network NW. The karaoke apparatus 100 is configured not only to reproduce karaoke music in accordance with a user's request, but also to evaluate the user's singing performed along with the reproduced karaoke music. The karaoke apparatus 100 is an embodiment of the display control apparatus of the present invention. The network NW is a LAN (Local Area Network), the Internet, or the like for data communication between the karaoke apparatus 100 and the server apparatus 200. The server apparatus 200 has an internal or external storage section, such as an HDD (Hard Disk Drive), in which various data, such as content data related to karaoke music, are stored, and the server apparatus 200 is configured to provide the content data to the karaoke apparatus 100 in response to a user's request. Here, each "content" item comprises a combination of audio and video of a karaoke music piece. That is, each item of content data includes so-called accompaniment data and video data. The accompaniment data represent the accompaniment and chorus parts of a music piece other than the singing voice, and the video data represent the lyrics of the music piece and a video to be displayed as the background of the lyrics. Note that a plurality of karaoke apparatuses 100 may be provided for one server apparatus 200; conversely, a plurality of server apparatuses 200 may be provided for one karaoke apparatus 100. Note also that the term "sound" as used herein refers to any of various types of sounds, such as human voices and sounds of musical instruments.
[0023] Fig. 2 is a block diagram showing the hardware configuration of the karaoke apparatus 100 in the system of Fig. 1. As shown in the figure, the karaoke apparatus 100 includes a control section 10, a storage section 20, an operation section 30, a display section 40, a communication control section 50, a sound processing section 60, a microphone 61, and a speaker 62, and these sections are interconnected via a bus 70. The control section 10 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like. In the control section 10, the CPU controls each section of the karaoke apparatus 100 by reading out a computer program stored in the ROM or the storage section 20 and loading the read-out computer program into the RAM.
[0024] The operation section 30 includes various operators and outputs, to the control section 10, operation signals indicative of various operations by the user. The display section 40 includes, for example, a liquid crystal panel, and, under the control of the control section 10, displays various images corresponding to each karaoke music piece, such as lyric subtitles (telop) and background videos. The communication control section 50 interconnects the karaoke apparatus 100 and the network NW in a wired or wireless manner, and controls data communication between the karaoke apparatus 100 and the server apparatus 200 via the network NW.
[0025] The server apparatus 200 is a computer including a CPU and various memories (not shown); in particular, the server apparatus 200 includes a network storage 210. The network storage 210 is, for example, a hard disk drive (HDD) in which various data, such as content data of karaoke music, are stored. Although the server apparatus 200 shown in Fig. 2 includes one network storage 210, the number of network storages 210 is not so limited, and the server apparatus 200 may include a plurality of network storages 210. In a case where content data of a karaoke music piece reserved by the user are stored in the network storage 210 in advance, the karaoke apparatus 100 communicates with the server apparatus 200 under the control of the communication control section 50 to perform streaming reproduction, in which, while the content data are being downloaded from the network storage 210 through the network NW, the karaoke apparatus 100 sequentially reproduces the already-downloaded part of the content data.
[0026] The microphone 61 outputs an audio signal representing a picked-up voice to the sound processing section 60. The sound processing section 60 includes an A/D (Analog-to-Digital) converter for converting the analog audio signal into digital sound data and outputting the digital sound data to the control section 10, so that the control section 10 receives the digital sound data. The sound processing section 60 also includes a D/A (Digital-to-Analog) converter for converting digital sound data received from the control section 10 into an analog audio signal and outputting the converted analog audio signal to the speaker 62, so that the speaker 62 audibly outputs a sound based on the analog audio signal received from the sound processing section 60. Note that, although this embodiment is described in relation to the case where the microphone 61 and the speaker 62 are included in the karaoke apparatus 100, only input and output terminals may be provided in the sound processing section 60, without the microphone 61 and the speaker 62 being included in the karaoke apparatus 100; in that case, an external microphone may be connected to the input terminal of the sound processing section 60 through an audio cable, and an external speaker may be connected to the output terminal of the sound processing section 60 through an audio cable. In addition, although this embodiment is described in relation to the case where the audio signal input from the microphone 61 and the audio signal output to the speaker 62 are analog audio signals, digital audio signals may be input and output instead; in such a case, the sound processing section 60 need not perform A/D conversion and D/A conversion. Similarly, the operation section 30 and the display section 40 may include respective external output terminals for connection to an external monitor.
[0027] The storage section 20 is a storage device for storing various data therein, such as a hard disk drive or a nonvolatile memory. The storage section 20 includes a plurality of storage areas, such as an accompaniment data storage area 21, a video data storage area 22, a guide melody (hereinafter referred to as “GM”) data storage area 23, and a user singing voice data storage area 25.
[0028] In the accompaniment data storage area 21, information related to accompaniment data representing accompaniment sounds of various music pieces is stored in advance. Each accompaniment data set (for example, a data file in the MIDI (Musical Instrument Digital Interface) format) is assigned music-related information, such as a music number uniquely identifying the music piece of interest and a music name representing the name of the music piece. In the video data storage area 22, lyric data representing the lyrics of various music pieces and background video data representing background videos to be displayed behind the lyrics are stored in advance. During karaoke singing, the lyrics indicated by the lyric data are displayed as lyric subtitles on the display section 40 as the music piece progresses, and the background video represented by the background video data is displayed behind the lyric subtitles on the display section 40 as the music piece progresses. The GM data storage area 23 prestores data representing the melody of the vocal part of each music piece, i.e. guide melody data (hereinafter referred to as "GM data"), as data specifying the constituent notes to be sung. The GM data, described for example in the MIDI format, represent the pitches of the model sounds. Such GM data are used by the control section 10 as a comparison standard or reference in evaluating the user's singing skill or musical performance. The evaluation processing performed by the control section 10 will be described later in detail.
[0029] The user singing voice data storage area 25 stores, for each music piece sung in karaoke, sound data generated by the sound processing section 60 converting the user's singing voice, picked up by the microphone 61 during reproduction of the corresponding accompaniment data, into digital data. These sound data, hereinafter referred to as "user singing voice data", are stored as data files of, for example, the WAVE (RIFF Waveform Audio) format. The user singing voice data of each music piece are associated with the GM data of the music piece by the control section 10.
[0030] Fig. 3 is a block diagram showing an example functional configuration of the karaoke apparatus 100. In Fig. 3, a reproduction section 11 and a scoring section 12 are implemented by the CPU of the control section 10 reading out a computer program prestored in the ROM or the storage section 20 and loading the read-out computer program into the RAM. The reproduction section 11 reproduces karaoke music. Specifically, the reproduction section 11 not only audibly outputs sounds through the speaker 62 based on the accompaniment data and the GM data, but also displays video on the display section 40 based on the video data.
[0031] The scoring section 12 evaluates voice data (user singing voice data) representing the singing voice of the user (singer). That is, the scoring section 12 evaluates the singing performance of the user based on the difference between the pitch of the singing voice and the pitch of the GM data.
[0032] Fig. 4 is a block diagram showing an example functional configuration of the scoring section 12. In Fig. 4, an analysis section (attribute data acquisition section) 121 analyzes the user singing voice data for two or more voice (sound) attributes and outputs attribute data representing the analyzed attributes. In this embodiment, pitch and volume are used as the voice attributes (i.e., first and second attributes, respectively). The analysis section (attribute data acquisition section) 121 includes a pitch acquisition section 121a and a volume acquisition section 121b. The pitch acquisition section 121a analyzes the user singing voice data stored in the user singing voice data storage area 25 to detect the pitch of the singing voice, and outputs data representing the detected pitch (hereinafter referred to as "pitch data"). The volume acquisition section 121b detects the volume of the user singing voice data stored in the user singing voice data storage area 25, and outputs data representing the detected volume (hereinafter referred to as "volume data").
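The patent does not specify how sections 121a and 121b detect pitch and volume. As a hedged illustration only, a common approach is per-frame autocorrelation for pitch and RMS level for volume; the frame size, sample rate, search range, and function names below are illustrative assumptions, not part of the disclosed apparatus:

```python
import math

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of one frame by picking the
    autocorrelation peak within a plausible vocal-pitch lag range."""
    n = len(frame)
    lo = int(sample_rate / fmax)          # shortest lag to consider
    hi = min(n - 1, int(sample_rate / fmin))  # longest lag to consider
    best_lag, best_corr = 0, 0.0
    for lag in range(lo, hi + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

def estimate_volume(frame):
    """Volume as the RMS level of the frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# Example: a 440 Hz sine sampled at 8 kHz
sr = 8000
frame = [math.sin(2 * math.pi * 440 * t / sr) for t in range(512)]
pitch = estimate_pitch(frame, sr)   # close to 440 Hz
volume = estimate_volume(frame)     # close to 1/sqrt(2) for a unit-amplitude sine
```

Real singing-voice analyzers typically add windowing, interpolation around the autocorrelation peak, and voiced/unvoiced detection; this sketch only conveys the kind of per-frame attribute data the analysis section 121 outputs.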
[0033] The comparison section 122 compares the pitch of the user singing voice data acquired by the pitch acquisition section 121a with the pitch of the GM data, and performs scoring processing on the user singing voice data based on the difference between the compared pitches. More specifically, the comparison section 122 compares, for example, the pitch change over time of the voice represented by the user singing voice data with the pitch change over time of the guide melody represented by the GM data, and calculates an evaluation value indicative of the degree of agreement between the compared pitch changes. For example, for a given note, if the pitch difference falls within a predetermined allowable range over the entire note, the evaluation value is calculated as 100% (meaning there is no defect or penalty point); if the pitch difference falls within the predetermined allowable range for only half the note length indicated by the GM data, the evaluation value is calculated as 50%. That is, the evaluation value of a note is calculated by dividing the length of the time period during which the pitch difference falls within the predetermined allowable range by the note length indicated by the GM data. The control section 10 determines a deduction point based on the calculated evaluation value. For example, in a case where "two points" are assigned to a given note in advance and the evaluation value is calculated as 50%, the control section 10 determines "one point" as the deduction point. Alternatively, the comparison section 122 may perform the scoring processing in further consideration of the volume of the user singing voice data acquired by the volume acquisition section 121b.
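The per-note scoring described in paragraph [0033] can be sketched as follows. The representation of a note as a list of per-sample pitch differences in cents, and the ±50-cent tolerance, are illustrative assumptions; the patent only specifies the ratio-of-time calculation and the deduction example:

```python
def note_evaluation(pitch_diffs_cents, tolerance_cents=50.0):
    """Evaluation value (%) = time the pitch difference stays within the
    allowable range, divided by the note's total length."""
    in_range = sum(1 for d in pitch_diffs_cents if abs(d) <= tolerance_cents)
    return 100.0 * in_range / len(pitch_diffs_cents)

def deduction(points_assigned, evaluation_percent):
    """Deduction for a note: e.g. 2 points assigned and 50% -> deduct 1 point."""
    return points_assigned * (100.0 - evaluation_percent) / 100.0

# A note sung in tune for exactly half its duration:
diffs = [10, 20, 30, 40, 80, 90, 120, 150]   # cents; 4 of 8 samples within +/-50
ev = note_evaluation(diffs)   # 50.0 (%)
ded = deduction(2, ev)        # 1.0 point deducted, matching the text's example
```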
[0034] The display control section 123 displays the scoring result output by the comparison section 122 on the display section 40. The display control section 123 not only serves as a receiving section that receives the attribute data output from the analysis section 121, but also serves as a display control section that displays, on the display section 40, changes over time in the values of the two or more voice attributes represented by the received attribute data. The display control processing performed by the display control section 123 is described below with reference to the drawings.
[0035] Fig. 5 is a diagram showing an example of a screen displayed on the display section 40 under the control of the display control section 123. In the screen of Fig. 5, the horizontal axis represents the passage of time, and the vertical axis represents pitch. That is, the screen is configured to display graphics related to the attribute data on a two-axis coordinate plane having a first axis (horizontal axis) representing the passage of time and a second axis (vertical axis) representing pitch and intersecting the first axis. Each solid line 300 in the figure represents the change over time in the pitch of the user singing voice data and will be referred to as a "pitch curve 300" hereinafter. That is, the display control section 123 indicates the value of the pitch (first attribute) acquired by the pitch acquisition section 121a by a coordinate position along the second axis (pitch axis), and the change of the indicated pitch value over time constitutes the pitch curve 300. As will be described later, the pitch curve 300 serves as a reference for displaying the volume value (second attribute) at each time point on the display section 40. In this embodiment, the vertical axis (second axis) is used as both the pitch axis and the volume axis. That is, the value of the pitch (first attribute) is represented by an absolute value along the vertical axis (second axis), while the value of the volume (second attribute) is represented by a relative value along the vertical axis (second axis), namely a relative value based on the coordinate position of the pitch curve 300.
[0036] Further, the display control section 123 displays a volume graphic (first graphic) 500 that represents the value of the volume (second attribute) by a length extending in the vertical-axis direction (i.e., the direction of the second axis) from the coordinate position of the pitch curve 300 at each time point. In this case, the display control section 123 displays the volume graphic 500 as a value relative to the pitch curve 300, in such a manner that the absolute coordinate position of the pitch curve 300 in the vertical-axis direction is used as the center coordinate position of the volume graphic 500. As an example, the volume graphic (first graphic) 500 is displayed in the manner of positive and negative envelopes of a volume amplitude waveform swinging in the positive and negative directions about the amplitude center. That is, the volume graphic 500 has a vertically symmetrical shape with respect to the pitch curve 300, and a greater amplitude of the volume graphic 500 in the vertical-axis direction indicates a greater volume. Note that, in the example shown in Fig. 5, the pitch curve 300 is not only displayed as a reference, but is also used as a visual curve graph (second graphic) that visually represents the value of the pitch (first attribute).
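The geometry of the symmetric volume graphic in paragraph [0036] can be sketched as follows: at each time point, the upper and lower edges of the graphic are offset from the pitch-curve coordinate by half a length proportional to the volume. The scale factor and function name are illustrative assumptions; the patent does not prescribe units:

```python
def volume_envelope(pitch_coords, volumes, scale=40.0):
    """Return (upper, lower) edge coordinates of the volume graphic, drawn
    vertically symmetric about the pitch curve: edge = pitch +/- scale*vol/2."""
    upper = [p + scale * v / 2.0 for p, v in zip(pitch_coords, volumes)]
    lower = [p - scale * v / 2.0 for p, v in zip(pitch_coords, volumes)]
    return upper, lower

pitch_coords = [100.0, 102.0, 105.0]   # vertical-axis positions of the pitch curve
volumes = [0.5, 1.0, 0.25]             # normalized volume at each time point
upper, lower = volume_envelope(pitch_coords, volumes)
# The loudest time point (volumes[1]) yields the widest part of the graphic,
# and the midline of the envelope coincides with the pitch curve throughout.
```

Filling the region between the two edge lists (e.g., with a polygon fill in any plotting toolkit) would produce the swinging-envelope shape described in the text.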
[0037] Further, in Fig. 5, each band-shaped graphic 400 represents the pitch of the GM data (model attribute data) and will be referred to as a "GM graphic 400" hereinafter. The GM data (model attribute data) represent model values of the pitch (first attribute). The display control section 123 also serves as a receiving section that receives the GM data (model attribute data), and displays the GM graphics 400 represented by the received GM data at positions (coordinates) along the vertical and horizontal axes. In addition, on the screen of Fig. 5, the display control section 123 displays a letter graphic representing the lyrics (hereinafter referred to as a "lyric graphic") 600 near the corresponding GM graphic 400 as related information. Alternatively, the display control section 123 may display the lyric graphic 600 overlapped or superimposed on the GM graphic 400.
[0038]
[0039] Fig. 6 is a flowchart showing an example operation sequence executed by the control section 10. Once a music piece selected by the user is reserved via the operation section 30 (affirmative determination in step S100), the control section 10 searches the storage section 20 for the reserved music piece in step S102. More specifically, in step S102, using the music number or music name of the selected music piece as a search key, the control section 10 searches for items related to the music piece in the accompaniment data storage area 21, the video data storage area 22, and the GM data storage area 23, and reads the data found by the search (searched-out data) into the RAM.
[0040] Then, in step S104, the control section 10 reproduces the karaoke music based on the aforementioned accompaniment data, video data, and GM data stored in the RAM. More specifically, in step S104, the control section 10 not only audibly reproduces sounds through the speaker 62 based on the accompaniment data and the GM data, but also displays video on the display section 40 based on the video data. Then, in step S106, the control section 10 stores, in the user singing voice data storage area 25, the user singing voice data generated by the sound processing section 60 converting the user's singing voice picked up by the microphone 61 into digital data. Then, in step S108, after the reproduction of the karaoke music is completed, the control section 10 scores the user's singing based on the user singing voice data stored in the user singing voice data storage area 25 and the GM data. Then, in step S110, the control section 10 displays the scoring result of the user's singing on the display section 40.
[0041] In step S110, the control section 10 displays, on the display section 40, letters/characters and images representing the scoring result, together with the analysis result of the singing voice as shown in Fig. 5. On the screen shown in Fig. 5, a plurality of attributes (pitch and volume) are displayed simultaneously using a common time axis as the analysis result of the singing voice. Because the volume is represented by the display width of the volume graphic 500 displayed superimposed on the pitch curve 300, the user can easily and intuitively grasp both the volume and the pitch by following the pitch curve 300 with his or her eyes.
[0042]
[0043] The above-mentioned embodiment can be variously modified as follows, and these modifications can be implemented in combination as needed.
[0044]
[0045] Although the preferred embodiment has been described above in relation to the case where the sound attributes analyzed by the control section 10 are volume and pitch, the attributes of the voice (sound) analyzed by the control section 10 may be any other attributes, such as clarity of pronunciation (intelligibility) or sound generation timing (utterance timing), as long as each attribute can represent a characteristic of the voice. In the case where the clarity of the voice is used as an attribute to be analyzed, for example, the control section 10 may detect a frequency spectrum from the voice using FFT (Fast Fourier Transform) technology and calculate the clarity based on the ratio between the level at a position in the level change where a formant appears (the formant level) and the level at a position where the level change has a trough (the trough level). More specifically, the control section 10 may perform the clarity calculation processing such that the greater the ratio of the formant level to the trough level, the higher the calculated clarity. Further, in the case where sound generation timing or utterance timing is used as an attribute to be analyzed, the control section 10 may detect the utterance timing of each lyric phoneme (note) from the user singing voice data and display a graphic whose display width in the vertical-axis direction increases as the difference between the detected utterance timing and the model utterance timing (represented by the GM data) increases.
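The clarity measure of paragraph [0045] reduces to a peak-to-trough ratio on a spectral envelope. As a hedged sketch, the envelope is represented here as a plain list of levels, and the simple max/min peak-trough search is an illustrative assumption standing in for the patent's FFT-based formant detection:

```python
def clarity(spectrum_levels):
    """Clarity as the ratio of the highest level (taken here as the formant
    level) to the lowest level (the trough level); larger ratio -> clearer."""
    formant_level = max(spectrum_levels)
    trough_level = min(spectrum_levels)
    return formant_level / trough_level if trough_level > 0 else float("inf")

clear_voice = [2.0, 8.0, 3.0, 1.0, 6.0, 2.0]    # pronounced peaks and troughs
muffled_voice = [3.0, 4.0, 3.5, 3.0, 4.0, 3.0]  # nearly flat envelope
# clarity(clear_voice) is much larger than clarity(muffled_voice), so the
# clearly articulated voice would be drawn with a wider clarity graphic.
```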
[0046]
[0047] Although this embodiment has been described above in relation to the case where the control section 10 displays the pitch curve 300, the GM graphic 400, and the volume graphic 500 superimposed on one another on the display section 40, the present invention is not so limited, and the control section 10 may display the pitch curve 300 and the volume graphic 500 superimposed on each other without displaying the GM graphic 400, as illustratively shown in Fig. 7. As another alternative, the control section 10 may display only the volume graphic 500 without displaying the pitch curve 300 and the GM graphic 400. Further, although the control section 10 in the above-described embodiment is described as displaying the lyric graphic 600 representing the lyrics in addition to the pitch curve 300, the GM graphic 400, and the volume graphic 500 as shown in Fig. 5, the control section 10 may be configured not to display the lyric graphic.
[0048]
[0049] In the above-described embodiment, the analysis section 121 is provided as the attribute data acquisition section in the control section 10 for analyzing the user singing voice data to generate, and thereby acquire, attribute data representing attributes of the singing voice. However, the present invention is not so limited, and the attribute data acquisition section in the control section 10 may be configured to acquire or receive the attribute data from a server apparatus or the like connected via a communication network, instead of the control section 10 generating the attribute data through analysis of the user singing data.
[0050]
[0051] Further, in the above-described embodiment, the control section 10 is configured to display the volume graphic 500 vertically symmetrically with respect to the pitch curve 300, i.e. such that the volume graphic 500 extends vertically upward and downward about the pitch curve 300 as its center. However, the display of the volume graphic 500 is not so limited, and the control section 10 may display the volume graphic 500 only above the pitch curve 300, as shown in Fig. 8. In the example shown in Fig. 8, the volume level is indicated by the vertical width of the volume graphic 500; thus, as in the foregoing embodiment, a greater vertical width of the volume graphic 500 indicates a greater volume level. That is, the control section 10 may display a graphic representing another attribute with the coordinate position of the attribute serving as the display reference (i.e., the reference attribute, which is the pitch in this embodiment) positioned at the center, the upper end, or the lower end, in the axial direction, of the graphic representing the other attribute. As yet another alternative, the pitch curve 300 and the volume graphic 500 may be displayed at a predetermined distance from each other, in such a manner that the display positions of the pitch curve 300 and the volume graphic 500 are offset from each other by the predetermined distance in the vertical-axis direction. In short, the control section 10 may be configured in any desired manner as long as it displays the value of the first attribute by a coordinate position along the second axis intersecting the first axis that indicates the passage of time, and displays a graphic representing the value of the second attribute by a length extending from a position based on that coordinate position (i.e., the coordinate position of the pitch curve 300 in the foregoing embodiment).
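The three placement options described in paragraph [0051] (the attribute graphic centered on, entirely above, or entirely below the reference coordinate) can be sketched as a small helper. The function name and the mode strings are illustrative assumptions, not terms from the patent:

```python
def graphic_edges(ref_coord, width, position="center"):
    """Vertical-axis (lower, upper) edges of an attribute graphic of the given
    width, placed relative to the reference (pitch-curve) coordinate."""
    if position == "center":          # symmetric about the reference (Fig. 5)
        return (ref_coord - width / 2.0, ref_coord + width / 2.0)
    if position == "above":           # entirely above the reference (Fig. 8)
        return (ref_coord, ref_coord + width)
    if position == "below":           # entirely below the reference
        return (ref_coord - width, ref_coord)
    raise ValueError(f"unknown position: {position}")

# The same width reads as the same attribute value in every placement mode;
# only where the graphic sits relative to the reference curve changes.
edges_center = graphic_edges(100.0, 20.0, "center")   # (90.0, 110.0)
edges_above = graphic_edges(100.0, 20.0, "above")     # (100.0, 120.0)
edges_below = graphic_edges(100.0, 20.0, "below")     # (80.0, 100.0)
```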
[0052] Further, although the foregoing embodiment has been described in relation to the case of displaying two types of attributes (i.e., volume and pitch), the number of attributes to be displayed is not limited to two and may be more than two. For example, as shown in Fig. 9, the control section 10 may display three types of attributes, such as volume, pitch, and clarity, using a common time axis. In the image shown in Fig. 9, the pitch curve 300 is similar to that in the above-described embodiment, the volume graphic 500 is displayed above the pitch curve 300 in the vertical-axis direction, and the vertical width of the volume graphic 500 represents the volume level. In addition, in Fig. 9, a clarity graphic 700 is displayed as a third graphic representing the value of a third attribute, i.e. the clarity of the voice; the clarity is represented by the vertical width of the clarity graphic 700, with a greater vertical width indicating higher clarity. Taking the pitch curve 300 as a reference, the clarity graphic 700 is displayed below the pitch curve 300. Further, in a case where the number of attributes to be displayed is three or more, any two of the three or more attributes may be displayed above and below the pitch curve 300, and another one of the three or more attributes may be displayed at a predetermined distance from the pitch curve.
[0053] Further, although the above preferred embodiment displays the volume graphic 500 using the pitch curve 300 as a reference position (i.e., based on the position of the pitch curve 300), the reference attribute is not limited to the pitch and may be any other appropriate attribute of the voice. For example, a volume curve, with time represented on the horizontal axis and volume on the vertical axis, may be used as the reference position, and a pitch graphic representing the pitch by a display width in the vertical-axis direction may be displayed overlapping the volume curve. In short, the control section 10 need only display a reference graphic at a position on a coordinate plane whose horizontal axis represents the passage of time and whose vertical axis represents the first attribute, and display, at a position based on the coordinate position of the reference graphic, a graphic that represents the value of the second attribute by its length in the vertical-axis direction.
[0054] Further, the above-described preferred embodiment uses a volume graphic that expresses the value of the volume (second attribute) by a length in the vertical-axis direction. As a modification, this may be replaced by a volume graphic that expresses the value of the volume (second attribute) by color (hue, color depth, etc.). In this case, the control section 10 may display the volume graphic 500 in such a manner that the color of the graphic 500 becomes darker as the volume increases and lighter as the volume decreases. Alternatively, the control section 10 may display the volume graphic 500 in such a manner that the color of the graphic 500 becomes more reddish as the volume increases (by increasing the brightness of the red component while reducing the brightness of the other color components) and more bluish as the volume decreases (by increasing the brightness of the blue component while reducing the brightness of the other color components). In this case too (i.e., as in the above-described embodiment), the control section 10 displays the graphic at a position corresponding to the coordinates of the pitch curve 300; the respective volume graphics 500 may be the same or different in shape and size. Furthermore, the representation of the volume value by a length along the vertical axis may be used together with the representation of the volume value by color (hue, color depth, etc.); that is, the volume graphic may be displayed through a combination of shape and size changes and color changes.
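The color-based variant of paragraph [0054] can be sketched as a mapping from volume to an RGB value that raises the red component and lowers the blue component as volume increases. The linear mapping and 0.0–1.0 normalization are illustrative assumptions; the patent only specifies the reddish/bluish tendency:

```python
def volume_to_rgb(volume):
    """Map a normalized volume (0.0-1.0) to an RGB triple: louder -> more
    reddish, quieter -> more bluish, per the color-coded modification."""
    v = min(max(volume, 0.0), 1.0)    # clamp to the normalized range
    red = int(255 * v)
    blue = int(255 * (1.0 - v))
    return (red, 0, blue)

# Loud notes render red, quiet notes blue:
# volume_to_rgb(1.0) -> (255, 0, 0)
# volume_to_rgb(0.0) -> (0, 0, 255)
```

As the text notes, such a color mapping can also be combined with the length-based envelope, coloring each segment of the volume graphic according to the volume at that time point.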
[0055] Further, although the above preferred embodiment displays the pitch curve 300 and the volume graphic 500 using the horizontal axis as the time axis and the vertical axis as the pitch axis, the present invention is not so limited, and the horizontal and vertical axes may be set in any other manner, as long as the control section 10 can display the graphics with the value of the reference attribute represented by a display position along one axis and the passage of time represented by the time axis.
[0056]
[0057] In the above-described preferred embodiment, the control section 10 is configured to store user singing voice data in the user singing voice data storage area 25, and perform analysis result display processing when the user's singing is terminated. However, the present invention is not limited to this, and the control section 10 may perform analysis result display processing in real time during the user's singing.
[0058] Further, in the above-described preferred embodiment, the control section 10 is configured not only to reproduce karaoke music and record the user's singing voice, but also to analyze the user singing voice data and display the analysis result (i.e., the result of the user singing voice data analysis) when the reproduction of the karaoke music is terminated. However, the present invention is not so limited, and the control section 10 may be configured to perform the analysis result display processing on previously recorded user singing voice data (i.e., voice data previously stored in the storage section 20).
[0059] Further, although the control section 10 in the above-described preferred embodiment is configured to compare the pitch of the singing voice with the pitch of the GM data and perform evaluation processing based on the comparison result, the evaluation processing may be performed in any other desired manner. For example, the control section 10 may calculate an evaluation value (i.e., evaluation result) for a given evaluation item using any conventionally known scheme, such as frequency analysis using FFT or volume analysis.
[0060] In addition, although the control section 10 in the above-described embodiment is configured to analyze the singing voice of the user (singer), the control section 10 may instead analyze and evaluate performance sounds produced by the user playing a musical instrument. That is, as noted above, the term "sound" as used herein refers to any type of sound, such as a human voice or a performance sound produced by a musical instrument.
[0061]
[0062] As another modification, the functions of the karaoke device 100 of the preferred embodiment of the present invention may be shared among two or more devices interconnected via a communication network, so that a system comprising these devices implements the karaoke device 100. For example, a computer device including a microphone, a speaker, a display device, an operating section, etc. may be connected via a communication network to a server device that performs the sound analysis processing, thereby forming such a system. In this case, the computer device converts each sound picked up by the microphone into an audio signal and sends the audio signal to the server device, and the server device analyzes the received audio signal and sends the analysis result back to the computer device.
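The division of labour just described might be sketched as the following request/response pair, with the actual network transport (LAN or the Internet) replaced by plain function calls and the "analysis" reduced to a trivial peak-amplitude stand-in. All names and the JSON message shape are illustrative assumptions.

```python
# Minimal sketch: the computer device packages picked-up audio into a
# message, and the server device analyzes it and returns the result.
import json

def client_build_request(samples):
    """Computer device side: wrap audio samples in a transmittable message."""
    return json.dumps({"type": "analyze", "samples": samples})

def server_handle(message):
    """Server device side: analyze the received audio and reply with the
    result. Here the 'analysis' is just peak amplitude and frame count."""
    request = json.loads(message)
    samples = request["samples"]
    result = {"peak": max(abs(s) for s in samples), "frames": len(samples)}
    return json.dumps({"type": "result", "result": result})
```

In a real deployment, the pitch and volume analysis of the embodiment would run on the server side, and the computer device would render the returned attributes on its display.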
[0063]
[0064] In addition, although the above embodiment has described the case where the display control device of the present invention is applied to a karaoke device that not only reproduces a karaoke accompaniment but also scores the singing voice, the display control device of the present invention may be applied to any device other than a karaoke device, as long as that device analyzes sound data and displays the analysis result of the sound data. That is, the display control device of the present invention is applicable to various types of devices, such as devices that display the results of voice analysis, devices that perform voice synthesis and editing, and devices that support language-learning functions. In the case where the display control device of the present invention is applied to a sound editing device, for example, simultaneously displaying multiple sound attributes along a common time axis allows the user to grasp the multiple sound attributes intuitively, thus facilitating sound synthesis and editing.
[0065] In addition, although the above embodiment has described the use of GM data as the model attribute data representing the attributes of a model sound, data other than GM data may be used as the model attribute data. For example, in the case where the display control device of the present invention is applied to a sound editing device, data obtained by rounding the analysis result to the twelve-tone scale may be used as the model attribute data. In this case, similarly to the above-described preferred embodiment, the control section 10 displays a graphic representing the analyzed attribute together with a graphic representing the model attribute data, as shown in Figure 5. In short, the model attribute data may be any data as long as it represents an attribute of a model sound.
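Rounding an analysis result to the twelve-tone scale can be sketched as snapping each detected frequency to the nearest equal-tempered semitone. The function name and the A4 = 440 Hz reference are assumptions for illustration.

```python
# Hypothetical sketch of "rounding the analysis result to the twelve-tone
# scale": snap each detected frequency to the nearest 12-TET semitone,
# yielding model attribute data from the analyzed pitch.
import math

A4 = 440.0  # assumed reference pitch in Hz

def quantize_to_semitone(freq_hz):
    """Return the frequency of the equal-tempered semitone nearest to
    freq_hz, relative to the A4 reference."""
    semitones = round(12.0 * math.log2(freq_hz / A4))
    return A4 * 2.0 ** (semitones / 12.0)
```

A pitch curve passed sample-by-sample through such a function produces the stepped, scale-aligned model curve that the analyzed curve can then be displayed against.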
[0066]
[0067] The present invention can be realized not only as a display control device, but also as a method for realizing such a display control device and as a program for causing a computer to implement the display control function. The program may be provided on a storage medium (such as an optical disc) storing the program, or may be downloaded and installed to a computer via the Internet or the like.
