

NHK Science & Technology Research Laboratories

2017 Annual Report

Nippon Hoso Kyokai [Japan Broadcasting Corporation]


Table of Contents

■1 8K Super Hi-Vision ……………………………4

1.1 Video systems 5

1.2 Cameras 5

1.3 Displays 6

1.4 Recording systems 7

1.5 Sound systems providing a strong sense of presence 8

1.6 Video coding 9

1.7 Media transport technologies 10

1.8 Satellite broadcasting technology 11

1.9 Terrestrial broadcasting transmission technology 12

1.10 Wireless transmission technology for program contributions (FPU) 14

1.11 Wired transmission technology 16

1.12 Domestic standardization of broadcasting systems 17

■2  Three-dimensional imaging technology …………………………………18

2.1 Integral 3D imaging technology 19

2.2 Three-dimensional imaging devices 21

2.3 Multidimensional image representation technology 22

■3  Internet technology for future broadcast services ………………………24

3.1 Content provision platform 24

3.2 Service linkage technologies 27

3.3 Security technologies 29

■4  Technologies for advanced content production …………………………………31

4.1 Content elements extraction technology 31

4.2 Content production support technology 32

4.3 Smart Production Lab 33

4.4 Wireless cameras 34

4.5 Technological development of ultra-directional microphone 34

■5  User-friendly broadcasting technologies ………………………………35

5.1 Information presentation technology 35

5.2 Speech recognition technology 36

5.3 Audio description technology 37

5.4 Language processing technology 39

5.5 Image cognition analysis 39

■6  Devices and materials for next-generation broadcasting …………41

6.1 Advanced image sensors 41

6.2 Advanced storage technologies 43

6.3 Next-generation display technologies 45

■7 Research-related work ……………………47

7.1 Joint activities with other organizations 47

7.2 Publication of research results 50

7.3 Applications of research results 54

Greetings …………………………………… 1

Accomplishments in FY 2017 ………… 2

NHK Science & Technology Research Laboratories Outline ……………………56


Greetings

Toru KURODA
Director of NHK Science & Technology Research Laboratories

NHK Science & Technology Research Laboratories (STRL), the sole research facility in Japan specializing in broadcasting technology and part of the public broadcaster NHK, is working to create a rich broadcasting culture through its world-leading R&D on broadcasting technologies.

Fiscal year 2017 produced steady progress in our preparations for the start of new 4K/8K satellite broadcasting scheduled for December 2018, including infrastructural development, exemplified by the successful launch of the broadcasting satellite BSAT-4a in September. We will continue to move forward with our efforts toward the widespread use of the service in 2020 while promoting the development of video production and transmission equipment for “full-featured 8K,” which is the ultimate 8K format, and research on technologies for the easy viewing of 8K broadcasting at home.

In February 2018, we utilized “automated sport commentaries” produced by speech synthesis for live sports coverage for the first time in the world and provided the service on NHK Online and Hybridcast. Our technology for automatically summarizing program video by using image analysis also performed well in program production. We will put further effort into R&D on program production assistance technology using artificial intelligence (AI), including universal service technologies to make broadcasts available to as many people as possible, as well as image, audio and language analysis technologies.

NHK is taking on the challenge of transforming itself from a public broadcaster into “public media” suited to the era of convergence of broadcasting and telecommunications. NHK STRL will also work actively on research into technologies that explore new possibilities for TV through use of the internet, 3D television that reproduces natural 3D images, and next-generation broadcasting devices.

This annual report summarizes our research results in FY 2017. It is my hope that this report will help you better understand NHK STRL’s research and development activities and enable us to build collaborative relationships that promote research and development. I also hope it will help you utilize the results of our efforts.

Finally, I would like to express my sincere gratitude for your support and look forward to your continued cooperation in the future.

May 2018


Accomplishments in FY 2017

8K Super Hi-Vision

NHK STRL is researching a wide range of technologies for Super Hi-Vision (SHV), focusing on program production equipment that supports a 120-Hz frame rate, which is the ultimate format of SHV. In our work on cameras and recording systems, we developed a high-speed SHV camera equipped with an image sensor capable of capture with a 240-Hz (up to 480-Hz) frame rate and a slow-motion player that can record video at 240 Hz while simultaneously reproducing it at 60 Hz for broadcasting high-definition slow-motion images in sports programs. In our work on SHV displays, we increased the luminance of our 8K sheet-type display consisting of four 4K organic light-emitting diode panels and implemented support for a 120-Hz frame frequency. We also developed a high-luminance 8K HDR liquid crystal display with a peak luminance of 3,500 cd/m², more than three times that of conventional displays. For transmission technologies, we designed the details of a preliminary standard for future SHV terrestrial broadcasting and made preparations for experimental stations to verify the standard. We also researched a 21-GHz-band satellite transmission scheme and an IP multicast delivery technology using MMT.

→See p. 4 for details.

Three-dimensional imaging technology

With the goal of developing a new form of broadcasting delivering a strong sense of presence, we aim to develop a more natural and viewable 3D television using spatial imaging technology that does not require the viewer to wear special glasses. To this end, we made progress in our research on the integral 3D method and the holographic method. In our research on integral 3D display technologies, we developed direct-view display equipment using a high-density 8K OLED display with a resolution in excess of 1,000 ppi. Regarding the holographic method, we prototyped an active-matrix-driven spin-SLM using a tunnel magnetoresistance element as a display device and successfully demonstrated the display of 2D images. We also researched multidimensional image representation technology to realize new image presentation in live sports and other programs. In particular, the outcomes of our research on the “sword tracer” system, which tracks an object at high speed using a near-infrared camera, and “Sports 4D Motion,” which combines multiview video and CGs, were utilized for program production.

→See p. 18 for details.

Internet technology for future broadcast services

We continued researching technologies for utilizing the internet to provide “public media” adapted to diverse viewer environments. In response to the diversification of distribution media and viewing terminals for TV programs, we studied a media-unifying platform that automatically selects appropriate media and distribution sources according to the user situation and confirmed its effectiveness through implementation on smartphones and user evaluation experiments. As video distribution technologies, we developed a streaming technology that allows smooth switching between different viewpoints of multi-viewpoint camera images over the internet and a server-side rendering technology that enables the TV viewing of 360-degree images. We investigated the use of Hybridcast Connect X to realize hybrid services on Hybridcast-enabled TVs. We developed and exhibited service instances in cooperation with commercial broadcasters and manufacturers and also promoted standardization at the IPTV Forum Japan.

→See p. 24 for details.

Technologies for advanced content production

We progressed with our R&D on program production technologies using artificial intelligence (AI) and a big-data analysis technology for producing high-quality, attractive content services, wireless transmission technologies for program contributions such as live sports coverage and music programs, and audio technologies to pick up sports sound clearly. We also utilized the Smart Production Lab, which we built in our laboratory in FY 2016, to promote our efforts to strengthen collaborations within NHK. This led to concrete results such as the use of our research outcomes on program production technologies for actual program production. In our research on wireless technologies for transmitting program contributions, we prototyped an SHV wireless camera using the millimeter-wave (42-GHz) band and verified transmission systems. For research on audio technologies, we developed a technology for simulating the performance of a shotgun microphone with high precision.

→See p. 31 for details.

[Photos] Tweet analysis system using big-data analysis technology; example of a media-unifying platform implemented on smartphones; example of the synthesis of ball CGs with real images assuming use for volleyball; high-luminance 8K HDR liquid crystal display


User-friendly broadcasting technologies

We are conducting R&D on technologies for enhancing user-friendly broadcasting that conveys information promptly and accurately to all viewers including those with vision or hearing impairments and non-native Japanese speakers. As an information presentation technology, we developed a system that automatically generates sign language CGs for explaining the status and rules of games by using competition data delivered during a game and verified the system in sports programs. In our research on speech recognition technology, we developed a system that produces transcripts from speech in video footage by using deep neural networks and an interface that can modify the resulting transcripts efficiently. We conducted evaluation experiments involving news program producers. In our research on audio description technology, we developed an “automated commentary” technology that automatically generates audio commentary describing game information such as athlete names and current scores by analyzing competition data distributed from external parties and provided this service on a special website on NHK Online and Hybridcast.

→See p. 35 for details.

Devices and materials for next-generation broadcasting

We are conducting R&D on imaging, recording and display devices and materials for SHV and 3D television, which may become the basis of future broadcast services. For imaging technologies, we continued with our research on 3D integrated imaging devices. We prototyped a device with 320×240 pixels (about 50×50 μm²) and demonstrated that it can achieve characteristics with a wide dynamic range. In our research on holographic recording technology for achieving a very large capacity and high transfer rate for SHV video recording, we developed a multilevel recording reproduction technology using amplitude modulation, error correction coding based on LDPC and a decoding technology using convolutional neural networks as ways to realize a large capacity with multilevel recording. In our work on a technology for increasing the color purity of OLEDs for a display that can reproduce a wide color range for SHV, we developed a green OLED with an improved color purity with x-y chromaticity coordinates of (0.18, 0.74) by modifying luminescent materials and the device structure.

→See p. 41 for details.

Research-related work

NHK STRL promotes the use of its research results on SHV and other technologies in several ways, including through the NHK STRL Open House, various exhibitions and reports. It also works to develop technologies by forging links with other organizations and collaborating in the production of programs. We contributed to domestic and international standardization activities at the International Telecommunication Union (ITU), Asia-Pacific Broadcasting Union (ABU), Information and Communications Council of the Ministry of Internal Affairs and Communications, Association of Radio Industries and Businesses (ARIB) and various organizations around the world. We exhibited our latest research results, such as SHV currently in its test broadcasting, broadcast technologies utilizing the internet, 3D television and smart production, at the NHK STRL Open House 2017. The event was attended by 20,194 visitors. We also held technology exhibitions in Japan and overseas to increase awareness of our research results.

→See p. 47 for details.

[Photos] Sports sign language CG application (game video with sign language CG presenting game status, rule explanations, game score, athlete information, progress of the game, visualization of excitement, and whistle cues via device vibration); chromaticity diagram and spectrum of the developed OLED device (platinum complex + top-emission structure vs. conventional device, compared against the 4K/8K and HDTV color standards); STRL Open House 2017


1 8K Super Hi-Vision

NHK STRL is researching a wide range of technologies in areas related to video, audio and transmission with an eye toward the start of new 4K/8K satellite broadcasting slated for December 1, 2018, the future implementation of full-featured 8K Super Hi-Vision (SHV) and the terrestrial broadcasting of 4K/8K.

In our research on video formats, we standardized test signals for high-dynamic-range (HDR) program production and investigated the brightness of HDR video. We also developed 8K/120-Hz HDR live production equipment and conducted demonstration experiments. In our work on cameras and recording systems, we developed a 1.25-inch 8K image sensor with 33 megapixels that supports high-speed imaging at a 240-Hz frame frequency (with a maximum of 480 Hz). We also prototyped an 8K/240-Hz single-chip color imaging system and a slow-motion player capable of simultaneous 240-Hz recording and 60-Hz reproduction. In addition, we implemented a video compression (ProRes) feature into our compression recorders, increased the speed of our compact memory package and developed a system that gives real-time previews of 8K ProRes files on a PC. In our work on displays, we increased the luminance of our 8K sheet-type display composed of four 4K organic light-emitting diode (OLED) panels that use thin glass substrates and made it operable at a 120-Hz frame rate. We also developed a high-luminance 8K HDR liquid crystal display with a peak luminance of 3,500 cd/m², more than three times that of conventional displays. To improve the image quality and operability of our projector, we enhanced a color shading correction processor and downsized the signal processors that drive the liquid crystal devices. Regarding video coding, we continued with our development of an 8K/120-Hz codec that uses High Efficiency Video Coding (HEVC) and fabricated a prototype capable of real-time operation. We also developed advanced video coding technologies with higher efficiency and proposed some of them at an international standardization meeting. Moreover, we studied the application of machine learning and super-resolution techniques to video coding.

In our work on audio, we improved the performance of our adaptive downmixing technique, which generates high-quality stereo or 5.1 ch sound signals from 22.2 ch audio signals for the simultaneous production of program audio. We also developed a real-time encoder/decoder for 22.2 ch sound using the MPEG-H 3D Audio LC profile to investigate an audio coding scheme for next-generation terrestrial broadcasting. Regarding reproduction technologies, we studied a way of increasing the robustness of the binaural reproduction method and developed a thin loudspeaker using a piezoelectric electroacoustic transduction film.

In our work on transmission technologies, we investigated the use of MPEG Media Transport (MMT) technology as a multiplexing transmission method for next-generation terrestrial broadcasting. We also continued with our research on an IP multicast delivery technology using MMT, including the demonstration of IP delivery technologies for 4K/8K content and the development of a technology for synchronized presentation on multiple terminals. For the widespread use of satellite SHV broadcasting, we worked to improve the 12-GHz-band transmission performance and prepare the reception environment. We also investigated the next generation of satellite broadcasting such as the 21-GHz band for a larger transmission capacity. Our work included research on a new transmission scheme, a 12/21-GHz-band dual-polarized antenna and satellite systems. For the terrestrial broadcasting of 4K/8K, we worked on the detailed design and performance improvement of a preliminary standard for next-generation terrestrial broadcasting. We also prepared the environment for main-station-scale experimental transmission stations in Tokyo and Nagoya for large-scale experiments to verify the preliminary standard. In addition, we researched the use of space-time coding for single-frequency network (SFN) technology in order to reduce the deterioration of transmission performance, which occurs in an SFN area where radio waves arrive from multiple transmitting stations. Regarding wireless transmission technologies for program contributions, we researched a microwave-band field pick-up unit (FPU) with the aim of enabling SHV live broadcasting of emergency reports and sports coverage and worked on its standardization. We also continued with our research on 1.2-GHz/2.3-GHz-band FPUs for the purpose of SHV mobile relay broadcasting such as road race coverage. We investigated a rate-matching technique that adaptively controls the coding rate of error correction codes according to the variation in the channel response and demonstrated the mobile transmission of 8K video through field experiments. Regarding wired transmission technologies, we developed 8K IP transmission equipment necessary for IP-based program production and program contribution systems and researched a technology for interconnection between IP devices with different transmission formats and control methods. We also worked toward the practical retransmission of 4K/8K broadcasting over cable TV and investigated an in-building transmission system toward the development of a baseband transmission system, which is a future large-capacity transmission technology.


1.1 Video systems

■ High-dynamic-range television
We investigated the operational practice of the production of high-dynamic-range television (HDR-TV) programs in cooperation with program production engineers. As the reference level for the Hybrid Log-Gamma (HLG) system, we determined that objects with a reflectance of 100% and white parts in characters and diagrams should be represented at 75% of the HLG signal level. We proposed a method for mapping SDR signals ranging from 0 to 100% to HLG signals ranging from 0 to 75% on the basis of the reference level to handle standard dynamic range (SDR) video materials in HDR programs. We also studied the luminance range which allows comfortable viewing of HDR-TV programs. The results of subjective evaluation experiments showed that viewers indicate the image on the display is too bright when its average luminance level exceeds 25% of the peak luminance of the display. We reflected these research results in Reports BT.2390(1) and BT.2408(2) issued by the International Telecommunication Union Radiocommunication Sector (ITU-R) and also compiled them in ARIB Technical Report TR-B43(3). In addition, a color bar that we proposed for HDR-TV program production was specified in ITU-R Recommendation BT.2110(4) and ARIB Standard STD-B72(5). Regarding test signals (PLUGE signals) for adjusting the black level of displays, we investigated signal levels suited for the adjustment of HDR-TV displays. Our findings were incorporated into a revised version of ITU-R Recommendation BT.814(6).

We studied a metric for the color volume of HDR/wide-color-gamut displays. We demonstrated that the color volume can be estimated from a combination of the color gamut (the area) on the xy chromaticity diagram based on a conventional colorimetry method and the peak luminance of the display, without the need for complicated 3D volume calculations in a color space(7).
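The reference-level convention above amounts to a simple linear remapping of SDR signal levels into the lower 75% of the HLG signal range, and the color-volume metric reduces to a product of the chromaticity-gamut area and the peak luminance. The short Python sketch below illustrates both ideas; the function names and the triangle-area formulation of the gamut are illustrative assumptions, not NHK's implementation.

```python
import numpy as np

HLG_REFERENCE_LEVEL = 0.75  # 100% reflectance white is placed at 75% of the HLG signal range

def map_sdr_to_hlg(sdr_signal):
    """Linearly remap normalized SDR signal levels (0.0-1.0, i.e., 0-100%)
    into the 0-75% portion of the HLG signal range, as described above."""
    sdr_signal = np.clip(np.asarray(sdr_signal, dtype=float), 0.0, 1.0)
    return sdr_signal * HLG_REFERENCE_LEVEL

def estimate_color_volume(primaries_xy, peak_luminance_cd_m2):
    """Rough color-volume figure: chromaticity-gamut (triangle) area times peak
    luminance. A simplified stand-in for the metric discussed in the text."""
    (xr, yr), (xg, yg), (xb, yb) = primaries_xy
    gamut_area = 0.5 * abs((xg - xr) * (yb - yr) - (xb - xr) * (yg - yr))
    return gamut_area * peak_luminance_cd_m2

if __name__ == "__main__":
    print(map_sdr_to_hlg([0.0, 0.5, 1.0]))        # -> [0. 0.375 0.75]
    bt2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
    print(estimate_color_volume(bt2020, 1000.0))  # BT.2020 gamut area (~0.212) x 1,000 cd/m2
```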

■ Full-featured 8K program production system
We are conducting R&D on program production equipment and systems that support a 120-Hz frame frequency with the goal of realizing full-featured 8K video production. At the NHK STRL Open House 2017, we conducted video production experiments by connecting cameras, a production switcher, a recorder, a display, time code equipment and a character superimposer that we previously developed, and demonstrated the feasibility of live program production with the full-featured 8K video format (Figure 1-1)(8).

As full-featured 8K production equipment, we developed multiple-wavelength transmitting equipment that can transmit uncompressed video and general IP data, and advanced development of a video editing system that can output 8K/120-Hz video and audio in real time.

[References]
(1) Report ITU-R BT.2390-3, “High dynamic range television for production and international programme exchange” (2017)
(2) Report ITU-R BT.2408-0, “Operational practices in HDR television production” (2017)
(3) ARIB Technical Report TR-B43 1.0, “Operational guidelines for high dynamic range video programme production” (2018) (in Japanese)
(4) Rec. ITU-R BT.2110-0, “Specification of colour bar test pattern for high dynamic range television system” (2017)
(5) ARIB Standard STD-B72, “Colour Bar Test Pattern for the Hybrid Log-Gamma (HLG) High Dynamic Range Television (HDR-TV) System (1.0)” (2018)
(6) Rec. ITU-R BT.814-3, “Specifications of PLUGE test signals and alignment procedures for setting of brightness and contrast of displays” (2017)
(7) K. Masaoka: “Rec. 2020 System Colorimetry and Display Gamut Metrology,” Proc. IDW/AD’17 (2017)
(8) D. Koide et al.: “Full-Featured 8K-UHDTV Program Production System -Demonstration Test of Live Program Production-,” ITE Technical Report, vol.41, no.23, BCT2017-68 (2017) (in Japanese)

Figure 1-1. Live program production experiment and production system

1.2 Cameras

■ 8K 4X high-speed camera and slow-motion player
We are developing a high-speed camera system and a slow-motion player to achieve an 8K slow-motion system for sports programs.

For the high-speed camera, we made progress in our development of an image sensor and capture equipment that support a 240-Hz frame frequency. We prototyped a 1.25-inch CMOS image sensor with 33 megapixels(1) (Figure 1-2). The image sensor contains a folding-integration analog-digital converter (ADC) with a three-stage pipelined ADC architecture and a digital correlated double sampling (CDS) circuit that suppresses fluctuations of the ADC circuit. This makes it possible to support both high-quality capture at a 120-Hz frame frequency and high-speed capture at 240-Hz or higher frame frequencies (up to 480 Hz). We fabricated an 8K/240-Hz single-chip color imaging system using the prototype image sensor. We also began developing a three-chip 8K high-speed camera using a color separation optical prism.

For the slow-motion player, we conducted experiments on 8K/240-Hz capturing of slow-motion video using a high-speed monochrome imaging system and a compression recorder that uses 4:2:0 color sampling(2). We also upgraded a slow-motion system capable of simultaneously recording and reproducing video both at 60 Hz, which we prototyped in FY 2016, to make it able to simultaneously record video at 240 Hz and reproduce it at 60 Hz. This system has two U-SDI input interfaces to support 4:4:4 color sampling. It also uses four signal processing boards, each of which can perform processing at 60 Hz, for the compression circuit and four SATA SSD recording units to support recording at 240 Hz. This system can be externally controlled by a general-purpose controller.

Meanwhile, we built an 8K 2X slow-motion system using our previously developed full-featured 8K SHV equipment (i.e., a 120-Hz compact single-chip camera and a compression recorder) and used the system for sports programs such as the NHK Trophy figure skating competition.

Figure 1-2. Prototype 1.25-inch high-speed image sensor
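The 4X slow-motion behavior follows directly from the frame-rate ratio: frames captured at 240 Hz and played out at 60 Hz stretch one second of action over four seconds of output. A minimal sketch of that timestamp arithmetic follows; it is only an illustration, and the helper names are hypothetical.

```python
CAPTURE_HZ = 240   # recording frame rate of the high-speed system
PLAYOUT_HZ = 60    # broadcast reproduction frame rate

def playout_time(frame_index: int) -> float:
    """Presentation time (seconds) of a captured frame when every 240-Hz frame
    is played back at 60 Hz, giving 240/60 = 4x slow motion."""
    return frame_index / PLAYOUT_HZ

def slowdown_factor() -> float:
    return CAPTURE_HZ / PLAYOUT_HZ

if __name__ == "__main__":
    # One second of capture (240 frames) occupies four seconds on playout.
    print(slowdown_factor())                # 4.0
    print(playout_time(CAPTURE_HZ - 1))     # ~3.996 s for the last frame of that second
```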


■ Full-featured 8K compact camera
With the aim of making a full-featured 8K SHV camera more compact and practical, we are developing a prototype three-chip 8K camera using a 1.25-inch optical system. For efficient development, we used the same components, such as a sensor drive board and a signal processing board, as those for our 4X high-speed camera.

For a previously developed 133-megapixel full-resolution single-chip camera that operates at a 60-Hz frame frequency, we applied inter-line scanning (interlaced scanning) to enable 120-Hz capture. In this scanning, the skipped lines were interpolated from the adjoining lines in moving parts of the image and from the prior frame in static parts(3).
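The line-interpolation rule described above is essentially motion-adaptive deinterlacing: fill a skipped line from its spatial neighbors where the scene is moving and from the previous frame where it is static. The Python sketch below illustrates that rule on grayscale arrays; the threshold, motion estimate and function names are illustrative assumptions rather than the camera's actual signal processing.

```python
import numpy as np

def fill_skipped_lines(current_frame, previous_frame, skipped_rows, motion_threshold=8.0):
    """Motion-adaptive interpolation of skipped scan lines (grayscale, float arrays).

    current_frame : full-height frame in which `skipped_rows` were not captured
    previous_frame: the previous fully reconstructed frame
    """
    out = current_frame.astype(float).copy()
    prev = previous_frame.astype(float)
    h = out.shape[0]
    for r in skipped_rows:
        above = out[max(r - 1, 0)]
        below = out[min(r + 1, h - 1)]
        spatial = 0.5 * (above + below)            # average of the adjoining lines
        temporal = prev[r]                         # same line from the prior frame
        # Estimate local motion from the captured neighbor lines.
        motion = 0.5 * (np.abs(above - prev[max(r - 1, 0)]) +
                        np.abs(below - prev[min(r + 1, h - 1)]))
        out[r] = np.where(motion > motion_threshold, spatial, temporal)
    return out

if __name__ == "__main__":
    prev = np.zeros((6, 8))
    cur = np.zeros((6, 8))
    cur[2] = 100.0                                  # a moving edge on a captured line
    print(fill_skipped_lines(cur, prev, skipped_rows=[3])[3])  # spatially interpolated (50.0)
```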

Using our previously developed full-featured 8K cameras and single-chip cameras, we recorded full-featured video outdoors, conducted live program production at the NHK STRL Open House 2017 and provided support for the production of NHK Special and other programs.

■ Other camera-related technologies
With the aim of developing a general-use 8K camcorder, we developed prototype equipment for verifying basic functions that can compress 8K video to 1/80 of the file size by using an AVC/H.264 encoder and record video for more than one hour onto four SD cards. The compressed video achieved a peak signal-to-noise ratio (PSNR) in excess of 48 dB(4).
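PSNR, the quality figure quoted above, compares the compressed output against the uncompressed source through the mean squared error. A short Python sketch of the standard formula follows for reference; the 10-bit peak value is an assumption for illustration, not a statement about the prototype's signal format.

```python
import numpy as np

def psnr(reference, distorted, peak=1023.0):
    """Peak signal-to-noise ratio in dB between two same-sized images.
    `peak` is the maximum code value (1023 assumed here for 10-bit video)."""
    ref = np.asarray(reference, dtype=float)
    dist = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 1024, size=(64, 64)).astype(float)
    noisy = original + rng.normal(0, 2.0, size=original.shape)   # small coding error
    print(round(psnr(original, noisy), 1))   # roughly 54 dB for sigma = 2 on a 10-bit scale
```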

To achieve autofocus (AF) capability, we developed an experimental imaging device using a hybrid AF system that combines phase-difference detection AF with contrast detection AF(5) and exhibited it at the NHK STRL Open House 2017.

We fabricated a dimming element using metal salt precipitation-type materials for an electronic neutral-density (ND) filter that can control incident light continuously. Reexamining the materials and modifying the drive circuit shortened the response time (i.e., the time in which the light transmission rate decreases to 1/8 of its original value) to three seconds(6).

We analyzed the principle of the problem of bit depth degradation, which occurs when high-chroma objects are captured. Based on the analysis result, we demonstrated that bit depth reproduction can be improved by changing the processing order of the linear matrix and knee and clipping during signal processing within the camera(7).

We developed a system that can precisely measure the two-dimensional spatial resolution characteristics of a TV camera in real time and exhibited it at the NAB Show(8).

The research on image sensors was conducted in cooperation with Shizuoka University. The research on the electronic ND filter was conducted in cooperation with Murakami Corporation.

[References]
(1) T. Yasue, K. Tomioka, R. Funatsu, T. Nakamura, T. Yamasaki, H. Shimamoto, T. Kosugi, S. Jun, T. Watanabe, M. Nagase, T. Kitajima, S. Aoyama and S. Kawahito: “A 2.1μm 33Mpixel CMOS Imager with Multi-Functional 3-Stage Pipeline ADC for 480fps High-Speed Mode and 120fps Low-Noise Mode,” 2018 IEEE International Solid-State Circuits Conference (2018)
(2) K. Kikuchi, T. Kajiyama, K. Ofura, T. Yamasaki, T. Yasue, R. Funatsu, E. Miyashita and H. Shimamoto: “Development of 8K240fps compression recorder,” ITE Annual Convention 2017, 33E-1 (2017) (in Japanese)
(3) T. Nakamura, T. Yamasaki, R. Funatsu and H. Shimamoto: “An 8K full-resolution 60-Hz/120-Hz multi-format portable camera system,” SMPTE 2017 Annual Technical Conference (2017)
(4) R. Funatsu, T. Kajiyama, T. Matsubara and H. Shimamoto: “Experimental Prototype of SD Memory Card Recordable 8K/60P Camcorder,” IEEE ICCE 2018 (2018)
(5) T. Yamasaki, R. Funatsu, T. Nakamura and H. Shimamoto: “Hybrid Autofocus System by Using a Combination of the Sensor-Based Phase-Difference Detection and Focus-Aid Signal,” IEEE ICCE 2018 (2018)
(6) K. Kikuchi, K. Miyakawa, T. Yasue, H. Shimamoto, T. Mochizuki and M. Makita: “A gradation control method for metal salt precipitation type optical devices,” ITE Winter Annual Convention 2017, 12C-1 (2017) (in Japanese)
(7) K. Nomura, T. Yasue, K. Masaoka and Y. Kusakabe: “Improvement of Color Reproduction for HDR/SDR Simultaneous Production Camera,” ITE Annual Convention 2017, 34E-2 (2017) (in Japanese)
(8) K. Masaoka, K. Arai, K. Nomura, T. Nakamura and Y. Takiguchi: “Real-Time Measurement of Ultra-High Definition Camera Modulation Transfer Function,” SMPTE 2017 Annual Technical Conference & Exhibition (2017)

1.3 Displays

We have made progress in our development of various displays that can handle 8K SHV video and continued with our research on large sheet-type displays.

■ SHV sheet-type display technologies
We are developing lightweight, thin and sheet-type organic light-emitting diode (OLED) displays for future large SHV displays for home use. In FY 2017, we increased the luminance of our sheet-type display composed of four 65-inch 4K OLED panels that use thin glass substrates and made it operable at a 120-Hz frame rate. The display achieved high-quality 8K images (Figure 1-3). This display was demonstrated in cooperation with LG Display and ASTRODESIGN, Inc. We plan to develop a display that can show 8K images with a single panel and to research a flexible display that uses a more lightweight and flexible plastic film substrate.

■ 8K HDR liquid crystal display
We developed an 8K HDR liquid crystal display that supports more than three times the luminance of conventional displays (Figure 1-4) in cooperation with Sharp Corporation. By employing a technology for increasing the backlight luminance, the display achieved a peak luminance of 3,500 cd/m² and a dynamic range of 400,000:1 (both in measured values).

■ Full-featured SHV projector
We improved the image quality of our full-featured SHV projector that uses red, green and blue laser diodes as light sources and supports a 120-Hz frame frequency. We reduced the luminance and color shadings on displayed images caused by the interference of laser beams by increasing the number of brightness correction points in the shading correction processor. We also improved the operability by downsizing the signal processing units that drive the liquid crystal device of the projector and incorporating them into the projector head.

Figure 1-3. Sheet-type OLED display supporting high luminance and 120-Hz frame rate
Figure 1-4. High-luminance 8K HDR liquid crystal display


1.4 Recording systems

We are developing compression recorders and peripheral equipment with the aim of developing full-featured 8K SHV recording equipment. In FY 2017, we added a video compression (ProRes) feature to our compression recorder, improved the recording and reproduction speed of our small memory package and developed a system that shows real-time previews of recorded content on a PC(1).

In our work on compression recorders, we implemented a video compression capability into our prototype compression recorder(1) so that recorded content can be input to and output from general-purpose editing software via files for direct editing. We also enabled 40-Hz or higher real-time compression processing by using pipeline processing of three ProRes compression IP cores implemented into one FPGA; a single ProRes compression IP core can perform 15-Hz processing of 8K resolution. We implemented this system into the three FPGAs on the compression signal processing board, which realized 120-Hz compression. To allow the simultaneous processing of 2K proxy video and 8K video, we incorporated a circuit that switches between 2K images and 8K images at high speed within the IP. The combined use of a decoder IP core that we implemented in FY 2016 and the compression IP core that we newly developed achieved the simultaneous recording of 8K video and 2K proxy video and 8K reproduction at 120 Hz. In addition, we implemented support for a general-purpose remote controller to enable operation on an outside broadcast (OB) van (Figure 1-5).
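The frame-rate budget in this design is simple multiplication: three 15-Hz ProRes cores pipelined in one FPGA give 45 Hz, and three such FPGAs give 135 Hz, covering the 120-Hz target with margin. The tiny Python sketch below just restates that arithmetic; the names are illustrative and nothing here reflects the actual firmware.

```python
CORE_RATE_HZ = 15      # one ProRes compression IP core handles 8K at 15 Hz
CORES_PER_FPGA = 3     # three cores pipelined per FPGA
FPGA_COUNT = 3         # FPGAs on the compression signal processing board

def fpga_throughput_hz() -> int:
    return CORE_RATE_HZ * CORES_PER_FPGA          # 45 Hz per FPGA (>= 40 Hz)

def board_throughput_hz() -> int:
    return fpga_throughput_hz() * FPGA_COUNT      # 135 Hz per board (>= 120 Hz)

if __name__ == "__main__":
    assert fpga_throughput_hz() >= 40
    assert board_throughput_hz() >= 120
    print(fpga_throughput_hz(), board_throughput_hz())   # 45 135
```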

Regarding the small memory package, we increased the speed of the NVMe interface which we developed in FY 2016 and implemented support for two slots. We investigated a way of maximizing the device performance of a small memory package with the NVMe interface and found that it is necessary for the host interface to support the simultaneous issuance of multiple commands and a larger transfer data block size. We therefore modified the host interface to support these capabilities, which enabled our compression recorder to achieve a recording speed in excess of 20 Gbps. We also equipped our small memory package with two slots and enabled it to record twice the number of hours of our prototype recorder fabricated in FY 2016. In addition, we developed backup software for small memory packages to enable data backup in raw data and video formats.

To allow easy previews on a PC of the content recorded in the compression recorder, we developed an 8K ProRes real-time preview board. Since the decoder IP core that we implemented into the compression recorder can operate at 8K/60 Hz if a sufficient memory bandwidth is secured, we implemented the decoder IP core into an FPGA evaluation board and developed a PC driver. By installing these on a PC, we achieved the real-time preview of recorded video (Figure 1-6).

[References]
(1) T. Kajiyama, K. Kikuchi, K. Ogura, E. Miyashita, M. Tecchikawahara, H. Watase, Y. Nagai and H. Takashima: “Development of Compression Recorder for Full-featured 8K Super Hi-Vision,” ITE Journal, Vol.72, No.1, pp.J41-J46 (2018) (in Japanese)

Figure 1-5. Compression recorder and small memory package
Figure 1-6. PC preview equipment


1.5 Sound systems providing a strong sense of presence

We are researching a 22.2 multichannel sound (22.2 ch sound) system for SHV and working on its standardization.

■ SHV sound simultaneous production system
We are studying technologies to produce high-quality 22.2 ch sound efficiently and simultaneously while producing stereo and 5.1 ch sound.

In FY 2016, we studied an energy spectrum correction technique for downmixing sound which focuses on the coherence between 22.2 ch audio signals. In FY 2017, we improved its performance and conducted subjective evaluations with sound engineers engaged in program production(1). As Table 1-1 shows, the sound obtained with the maximum amount of correction, both in the suppression process and the amplifying process, was rated as the most appropriate downmixed sound. This demonstrated the sound quality improvement effect of the proposed technique.
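As background to the correction idea, a channel downmix is a weighted sum of input channels, and an energy-spectrum correction then rescales the result per frequency band so that the downmix keeps an appropriate energy despite correlation (coherence) between the summed channels. The sketch below shows only that general principle on a toy two-channel case; the weights, the gain rule and all names are illustrative assumptions, not NHK's 22.2 ch algorithm.

```python
import numpy as np

def coherence_corrected_downmix(spectra, weights):
    """Toy band-wise downmix with an energy-preserving correction gain.

    spectra : (channels, bands) array of magnitude spectra for one time frame
    weights : (channels,) downmix weights
    Returns the corrected downmix magnitude per band.
    """
    spectra = np.asarray(spectra, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    plain_mix = np.abs((w * spectra).sum(axis=0))          # coherent (amplitude) sum
    target = np.sqrt(((w * spectra) ** 2).sum(axis=0))     # combined energy of the inputs
    gain = np.where(plain_mix > 1e-12, target / plain_mix, 1.0)
    return gain * plain_mix                                # suppress or amplify per band

if __name__ == "__main__":
    # Two identical (fully coherent) channels: the plain sum doubles the amplitude,
    # while the corrected mix keeps the combined energy (factor sqrt(2) instead of 2).
    ch = np.ones((2, 4))
    print(coherence_corrected_downmix(ch, weights=[1.0, 1.0]))
```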

We are also studying an upmixing technology for using sound materials recorded in stereo for 22.2 ch sound production. We developed a technique for separating components according to the mutual correlation of stereo signals by using an adaptive filter to generate 22.2 ch sound materials(2).

■ Reproduction of converted SHV sound
We are researching technologies for the easy reproduction of 22.2 ch sound at home. We continued with our research on binaural reproduction using line array loudspeakers. In FY 2017, we devised a design method for a reproduction controller that increases the robustness against system perturbations and external disturbances(3) and implemented it into the signal processor that we are developing in cooperation with Sharp Corporation.

We also proposed a method for the low-order and high-accuracy modeling of head-related transfer functions including the characteristics of an expected reproduction environment and developed a signal processor for reproducing 22.2 ch sound with headphones using this method.

■ Standard test materials for 3D multichannel stereophonic sound systems
In FY 2017, the Institute of Image Information and Television Engineers published Series A of standard test materials for 3D multichannel stereophonic sound systems. Toward this achievement, we contributed to the sound source production conducted by ARIB and prepared an explanatory document describing evaluation items and recording conditions.

■ Acoustic devices
We exhibited the 22.2 ch sound single-unit microphone that we developed in FY 2016 at the NHK STRL Open House 2017. We also contributed to the production of standard test materials for 3D multichannel stereophonic sound systems. Using our microphone, we recorded various sounds such as the performance of octets by wind and string instruments and ambient sound at an amusement park. In addition, we studied a beamforming method to improve the performance of separation between channels.

We developed a thin loudspeaker using a piezoelectric electroacoustic transduction film that could be applied to loudspeakers for flat-panel TVs and 22.2 ch sound loudspeakers for home use and exhibited it at the NHK STRL Open House 2017. This research was conducted in cooperation with Fujifilm Corporation.

■ Audio services for next-generation terrestrial broadcasting
We developed a real-time encoder/decoder for 22.2 ch sound using the MPEG-H 3D Audio LC profile(4) to study an audio coding scheme for next-generation terrestrial broadcasting. This research was conducted in cooperation with the Fraunhofer Institute for Integrated Circuits.

We devised a serial form of the audio definition model (ADM), which is metadata used for object-based audio, and a method to convey serial ADM. We also prototyped both a transmitter and a receiver for serial ADM using a digital audio signal interface.

■ Standardization
At ITU-R, we prepared a Preliminary Draft New Recommendation on serial ADM on the basis of a joint proposal by Japan, the US and the UK. At the Society of Motion Picture and Television Engineers (SMPTE), we proposed a draft of the standard for conveying serial ADM using the AES3 digital audio signal interface. At ARIB, we set up a group to study the requirements for next-generation audio services and began activities.

At the Japan Electronics and Information Technology Industries Association (JEITA) and the International Electrotechnical Commission (IEC), we produced a committee draft for a vote on a standard for transmitting a 22.2 ch sound signal stream encoded by MPEG-4 AAC using an optical interface to reproduce 22.2 ch sound at home. Additionally, we contributed to the revision of a standard for transmitting a 22.2 ch sound signal stream encoded by MPEG-4 AAC using HDMI, specified by the Consumer Technology Association (CTA).

At JEITA and IEC, we continued with our work to revise a standard for general channel allocation, including the 22.2 ch sound system, to add channel labels for various multichannel sound systems.

At AES, we contributed to the publication of technical guidelines which prescribe that each country’s broadcasting rules (i.e., -24 LKFS for Japan) should be followed in principle for the target loudness of over-the-top television programs(5).

Table 1-1. Evaluation result of correction amount in suppression and amplification processes
  Selection rank:        1    2       3      4
  Suppression amount:    Max  Medium  Small  None
  Amplification amount:  Max  Medium  Small  None
(A higher rank indicates a higher evaluation.)

Figure 1-7. Appearance of thin loudspeakers

[References]
(1) T. Sugimoto and T. Komori: “Tone compensation method for downmixing of 22.2 ch sound,” ITE Winter Annual Convention 2017, 12C-5 (2017) (in Japanese)
(2) Y. Sasaki, T. Komori and T. Nishiguchi: “A study of upmix algorithm from stereo to 22.2ch audio,” ITE Winter Annual Convention 2017, 31B-3 (2017) (in Japanese)
(3) K. Matsui, A. Ito, S. Mori, M. Inoue and S. Adachi: “A method to relieve the binaural reproduction controller applying output tracking control,” Autumn Meeting of the Acoustical Society of Japan, 1-P-31 (2017) (in Japanese)
(4) ISO/IEC 23008-3:2015/AMD3:2017 (2017)
(5) “Loudness Guidelines for OTT and OVD Content,” Technical Document AESTD1006.1.17-10 (2017)


1.6 Video coding

We are researching video coding techniques for full-featured 8K SHV and SHV terrestrial broadcasting.

■ 8K 120-Hz HEVC encoder
We are developing an encoder that supports 8K/120-Hz video (Figure 1-8). The encoder, which consists of twelve 4K/60-Hz encoding units, is capable of real-time coding of 8K/120-Hz input video by parallel processing. This system conforms to the Main 10 profile using the HEVC/H.265 scheme and supports 4:2:0 10-bit coding of 8K/120-Hz video.

Its bitstream is compliant with ARIB Standard STD-B32 Version 3.9, which allows not only an 8K/120-Hz decoder but also a decoder for 8K test broadcasting (an 8K/60-Hz decoder compliant with ARIB Standard STD-B32 Version 3.9) to partially decode the 60-Hz sub-bitstream. The system also supports video usability information (VUI) parameters for HDR that are compliant with the above standard.
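Backward compatibility of this kind rests on temporal scalability: frames are tagged with a temporal layer so that a 60-Hz decoder can keep only the base layer of a 120-Hz bitstream while a 120-Hz decoder uses both layers. The sketch below illustrates that layering by frame parity; it is a conceptual illustration under that assumption, not the actual STD-B32 signaling.

```python
from typing import List, Tuple

def assign_temporal_layers(num_frames: int) -> List[Tuple[int, int]]:
    """Tag each 120-Hz frame with a temporal layer: layer 0 (even frames) forms
    the 60-Hz sub-stream, layer 1 (odd frames) completes the 120-Hz stream."""
    return [(i, i % 2) for i in range(num_frames)]

def extract_sub_stream(frames: List[Tuple[int, int]], max_layer: int) -> List[int]:
    """What a decoder keeps: all frames whose temporal layer <= max_layer."""
    return [idx for idx, layer in frames if layer <= max_layer]

if __name__ == "__main__":
    stream_120hz = assign_temporal_layers(8)
    print(extract_sub_stream(stream_120hz, max_layer=0))  # 60-Hz decoder: [0, 2, 4, 6]
    print(extract_sub_stream(stream_120hz, max_layer=1))  # 120-Hz decoder: [0, 1, ..., 7]
```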

Since the encoder divides the video frame into four vertical slices for parallel processing, a degradation in image quality tends to appear around the boundaries between divided slices of video, especially when the video has vertical motion. To suppress the degradation, we employed a design that reduces 8K/120-Hz video to 4K/60-Hz video and analyzes the reduced video in advance to control the entire coding. We developed technologies for increasing the image quality, which include a technique to control the quantization value of the boundary areas based on the amount of motion predicted by preliminary analysis. We conducted subjective evaluations using a software simulator to verify the effectiveness of these technologies and confirm the coding quality(1). This research was conducted in cooperation with Fujitsu Laboratories Ltd.
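One way to read the boundary control described above: spend more bits (a lower quantization parameter) on blocks near a slice boundary when the preliminary analysis predicts strong motion there. The sketch below encodes that heuristic; the offsets, ranges and names are purely illustrative assumptions and not the encoder's actual rate-control rule.

```python
def boundary_qp(base_qp: int, distance_to_boundary: int, motion_magnitude: float,
                max_offset: int = 4, influence_rows: int = 2) -> int:
    """Lower the QP (finer quantization) for blocks close to a slice boundary,
    scaled by the motion predicted from the downscaled pre-analysis pass."""
    if distance_to_boundary > influence_rows:
        return base_qp
    proximity = 1.0 - distance_to_boundary / (influence_rows + 1)
    motion = min(max(motion_magnitude, 0.0), 1.0)        # normalized to 0..1
    offset = round(max_offset * proximity * motion)
    return max(base_qp - offset, 0)

if __name__ == "__main__":
    print(boundary_qp(32, distance_to_boundary=0, motion_magnitude=1.0))  # 32 - 4 = 28
    print(boundary_qp(32, distance_to_boundary=5, motion_magnitude=1.0))  # unchanged: 32
```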

As part of R&D on 120-Hz video coding, we produced evaluation images that are helpful for evaluating the performance of the processing of fast-moving images and coding control in cooperation with NTT (Nippon Telegraph and Telephone Corporation).

■ 8K 120-Hz HEVC decoder
We are developing a decoder in parallel with the encoder. Our decoder consists of a software decoder that operates on a general-purpose workstation and an interface converter. In FY 2016, we implemented a video decoder unit of the software decoder. In FY 2017, we implemented its audio decoder unit and TS input unit. This enabled the real-time decoding of TS signals of 8K/120-Hz video and 22.2 ch sound. Decoded 8K/120-Hz video is generated from eight spatiotemporally divided DisplayPort outputs and converted by the interface converter to signals for a single U-SDI. Decoded audio signals are multiplexed with video signals for output.

■ Development and standardization of next-generation video coding technologies
We are developing advanced high-efficiency video coding technologies for next-generation terrestrial broadcasting. For intra-frame prediction technology, we developed a method for the high-precision prediction of chroma signals using decoded luma samples in intra prediction and a method for improving entropy coding for chroma intra prediction modes(2). For inter-frame prediction technology, we developed a motion compensation method considering the continuity with the motion vector of neighboring blocks and a method for predicting the motion vector adaptively according to the shape of partitioned coding blocks. We also developed a way to improve the entropy coding of transform coefficients by estimating the residual signal energy and a deblocking filter control method that reduces significant coding degradation in HDR-format video. We confirmed that these technologies improve coding efficiency and proposed some of them to an international standardization conference on next-generation video coding as prospective elemental technologies for advanced video coding.

To promote the performance improvement of future video coding schemes for HDR video, we provided the JCT-VC and JVET international standardization conferences with test sequences of the HLG format and also jointly proposed efficient coding settings for using HEVC as comparative criteria with the BBC. These settings were adopted as the criteria for the performance comparison of future video coding schemes. They were also reflected in HEVC video coding guidelines for HDR coding (ISO/IEC TR 23008-15 | ITU-T H.Sup.18)(3)-(5).

Figure 1-8. Appearance of 8K/120-Hz encoder


As an overseas research effort, we developed HDR tone-mapping nonlinear functions suited for video coding in cooperation with Universitat Pompeu Fabra and demonstrated the improvement in coding efficiency(6).

■ Application of machine learning to coding tools
As an initial study to explore the feasibility of applying machine-learning-based coding tools to video coding, we conducted a basic evaluation on post-filters and intra prediction. We built a post-filter using a machine learning method based on convolutional neural networks and demonstrated that it can reduce mosquito noise and improve the PSNR. We also developed an intra prediction tool based on a multilayer perceptron, which is a type of neural network. The results of training and evaluation showed that it is possible to develop a single predictor that behaves as if it is equipped with multidirectional and planar prediction modes, indicating the feasibility of a new, efficient intra prediction tool.
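A CNN post-filter of the kind evaluated here is trained to map a decoded (artifact-bearing) frame to a residual that, added back to the input, approximates the original frame. The PyTorch sketch below shows a minimal three-layer residual post-filter of that general shape; the layer sizes, kernel widths and training details are assumptions for illustration and do not describe the network actually evaluated.

```python
import torch
import torch.nn as nn

class PostFilterCNN(nn.Module):
    """Minimal residual post-filter: predicts a correction that is added to the
    decoded luma patch to suppress coding artifacts such as mosquito noise."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, decoded: torch.Tensor) -> torch.Tensor:
        return decoded + self.body(decoded)      # residual correction

if __name__ == "__main__":
    model = PostFilterCNN()
    decoded_patch = torch.rand(1, 1, 64, 64)     # one 64x64 luma patch, values in [0, 1]
    filtered = model(decoded_patch)
    print(filtered.shape)                        # torch.Size([1, 1, 64, 64])
    # Training (not shown) would minimize e.g. MSE between `filtered` and the original patch.
```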

Also, in cooperative research with Meiji University, we developed an intra prediction process using a neural network consisting of two convolutional layers and two fully connected layers and confirmed that it can increase the speed of a prediction mode that uses prediction samples and adjacent reference samples as an input(7).

■ Enhancement of super-resolution technology and its application to video coding
In our research for enhancing super-resolution technologies, we developed a technique for super-resolution reconstruction from 2K to 8K that uses an alignment and assignment method considering the frequency band by a registration process between wavelet multiscale components. The new technique achieved a higher speed and higher image quality than conventional methods(8)(9).

We are also studying a way of applying super-resolution reconstruction to video coding technology. As inter-frame prediction images, we newly introduced blurred prediction images and super-resolved prediction images that use registration super-resolution between wavelet multiscale components and observed improvements(10).

■ Noise reduction and band limitation equipment
We developed noise reduction and band limitation equipment that performs a pre-coding process to increase coding efficiency. The equipment applies shrinkage functions in each element position after the wavelet-packet decomposition of each frame and controls the amount of shrinkage according to the band limitation frequency and the pixel level, enabling a high-precision noise reduction and band limitation process(11). This research was conducted as a government-commissioned project from the Ministry of Internal Affairs and Communications titled “R&D on Advanced Technologies for Terrestrial Television Broadcasting.”
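Shrinkage-based noise reduction of this general kind works by transforming each frame into subbands, soft-thresholding (shrinking) the detail coefficients, and transforming back. The sketch below does this with a single-level 2D Haar transform and one global threshold; the transform choice, threshold rule and names are simplifying assumptions, not the equipment's wavelet-packet processing.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar decomposition of an array with even dimensions."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft_threshold(c, t):
    """Shrinkage function: pull coefficients toward zero by threshold t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_frame(frame, threshold=4.0):
    ll, lh, hl, hh = haar2d(frame.astype(float))
    return ihaar2d(ll, *(soft_threshold(b, threshold) for b in (lh, hl, hh)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.full((8, 8), 128.0) + rng.normal(0, 3.0, (8, 8))
    print(np.round(denoise_frame(noisy) - 128.0, 1))   # residual noise is reduced
```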

[References]
(1) S. Iwasaki, Y. Sugito, K. Chida, K. Iguchi, K. Kanda et al.: “Subjective Evaluation of 8K120Hz Encoder Simulator,” Proceedings of the 2017 IEICE General Conference, D-11-7 (2018) (in Japanese)
(2) S. Iwamura, S. Nemoto and A. Ichigaya: “Redundant flag removal on chroma intra mode coding,” JVET-H0071 (2017)
(3) S. Iwamura, S. Nemoto, A. Ichigaya and M. Naccari: “Analysis of 4K Hybrid Log-Gamma test sequences,” JVET-F0094 (2017)
(4) S. Iwamura, S. Nemoto and A. Ichigaya: “Candidate rate points of HLG material for anchor generation,” JVET-G0103 (2017)
(5) S. Iwamura, S. Nemoto, A. Ichigaya and M. Naccari: “On the need of luma delta QP for BT.2100 HLG content,” JVET-G0059 (2017)
(6) Y. Sugito et al.: “Improved High Dynamic Range Video Coding with a Nonlinearity based on Natural Image Statistics,” International Journal of Signal Processing Systems, Vol.5, No.3, pp.100-105 (2017)
(7) T. Toyozaki, Y. Shishikui and S. Iwamura: “A Study on intra prediction mode decision method using deep learning,” Proceedings of the 2017 IEICE General Conference, D-11-56 (2017) (in Japanese)
(8) Y. Matsuo and S. Sakaida: “Super-Resolution for 2K/8K Television by Wavelet-Based Image Registration,” Proceedings of IEEE GlobalSIP, GS IVM-P.1.4, pp.378-382 (2017)
(9) Y. Matsuo, A. Ichigaya and K. Kanda: “Super-Resolution from 2K to 8K by Registration of Wavelet Multi-Scale Components,” Picture Coding Symposium of Japan 2017 (PCSJ2017), P-2-15, pp.86-87 (2017) (in Japanese)
(10) Y. Matsuo and S. Sakaida: “Coding Efficiency Improvement by Wavelet Super-Resolution Restoration for 8K UHDTV Broadcasting,” Proceedings of IEEE ISSPIT (2017)
(11) Y. Matsuo, K. Iguchi and K. Kanda: “Coding Efficiency Improvement by Band-Limitation Equipment for Advanced Digital Terrestrial TV Broadcasting System,” Proceedings of the 2017 ITE Winter Convention, 14C-4 (2017) (in Japanese)

1.7 Media transport technologies

We are conducting R&D on the use of MPEG Media Transport (MMT) technology as a multiplexing transmission method for next-generation terrestrial broadcasting. We are also conducting research on IP multicast delivery technology using MMT, in which we demonstrated the IP delivery of 4K/8K live content and presented a new viewing experience using inter-terminal synchronization technology.

■ Multiplexing transmission method for next-generation terrestrial broadcasting
Aiming for next-generation terrestrial broadcasting, we researched a multiplexing scheme for IP packets that conforms to the channel coding system for terrestrial broadcasting and an IP transmission system used over studio to transmitter links (STLs) and transmitter to transmitter links (TTLs) to enable a single-frequency network (SFN). We compiled our findings into specifications and conducted verifications with a prototype remux(1). More specifically, we performed experiments at the Hitoyoshi and Mizukami experimental stations in Kumamoto Prefecture on transmitting output signals from the remux to multiple modulators over commercial IP networks. The results demonstrated that an IP transmission-based SFN can be built by using signaling information in the signals for synchronization control. To improve the quality of mobile services in next-generation terrestrial broadcasting, we proposed a broadcast signal complementary system that allows continuous program viewing even when broadcast waves are interrupted during travel by seamlessly switching to the reception by mobile communications. The results of field experiments (Figure 1-9) demonstrated the feasibility of the system(2)(3).

Part of this research was conducted as a government-commissioned project from the Ministry of Internal Affairs and Communications titled “Research and Development for Advanced Digital Terrestrial Television Broadcasting System.”

■ MMT-based IP multicast delivery technology
To promote 8K SHV broadcasting, we verified MMT-based IP multicast delivery technology that could be used for the IP retransmission of broadcasting in closed networks of cable TV stations and other service providers and for the IP delivery of the relevant content linked with broadcasting. We conducted a delivery experiment using live content from the ISU Grand Prix of Figure Skating 2017/2018, NHK Trophy, and demonstrated that 4K/8K content can be delivered to multiple content delivery service providers simultaneously with low latency. As an example of the application of a high-accuracy synchronization scheme using an absolute time stamp, which is a feature of MMT, we developed an inter-terminal synchronization technology that allows multiple content delivered by IP multicast distribution to be displayed in synchronization without delay by adjusting the timing to display video at the same time with clock synchronization among multiple reception terminals. We exhibited interactive content, “Domo’s Slapstick Race,” using this technology at the NHK STRL Open House 2017, CEATEC JAPAN 2017 and NHK Science Stadium 2017, demonstrating the feasibility of a new viewing experience(4).
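The inter-terminal synchronization idea rests on MMT's absolute (wall-clock) presentation timestamps: every receiver compares the timestamp against a common clock and holds each video frame until that instant, so displays stay aligned without exchanging messages with one another. The sketch below illustrates that scheduling logic; the clock source and function names are assumptions for illustration only.

```python
import time

def wait_until_presentation(presentation_time_utc: float, clock=time.time) -> float:
    """Block until the absolute presentation timestamp (seconds since the epoch,
    as carried in the stream) is reached on this terminal's synchronized clock.
    Returns how late (positive) or on-time (zero) the frame is released."""
    delay = presentation_time_utc - clock()
    if delay > 0:
        time.sleep(delay)        # hold the frame; every terminal releases it together
        return 0.0
    return -delay                # already past due (e.g., network jitter exceeded the buffer)

if __name__ == "__main__":
    # Schedule a frame 0.2 s into the future; all terminals sharing a synchronized
    # clock (e.g., via NTP) would display it at the same wall-clock instant.
    target = time.time() + 0.2
    lateness = wait_until_presentation(target)
    print(f"released with lateness {lateness:.3f} s")
```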

[References]
(1) S. Aoki et al.: “A Study on IP Multiplexing Scheme in Next-generation Terrestrial Broadcasting System,” ITE Annual Convention, 14C-1 (2017) (in Japanese)
(2) Y. Kawamura et al.: “Field Experiment of Hybrid Video Delivery Using Next-Generation Terrestrial Broadcasting and a Cellular Network,” IEEE International Conference on Consumer Electronics 2018, pp.173-174 (2018)
(3) Y. Kawamura et al.: “An Implementation of a Cooperative Broadcast and Cellular Network System for Mobile Reception of Next-Generation Terrestrial Broadcasting,” ITE Annual Convention, 14C-4 (2017) (in Japanese)
(4) Y. Kawamura: “Inter-Terminal Synchronization Technology Using MMT,” Cable New Era, vol.4, no.7, p.47 (2017) (in Japanese)

Figure 1-9. Field experiment using a complementary system (hybrid reception along a vehicle route with areas shielded from broadcast radio waves; map image: Google)

1.8 Satellite broadcasting technology

We are researching a 12-GHz-band satellite broadcasting system to improve the transmission performance for 8K UHDTV broadcasting, and researching next-generation satellite broadcasting systems such as 21-GHz-band satellite broadcasting for future broadcasting services.

■ Advanced transmission system for satellite broadcasting
We are researching 64APSK (Amplitude Phase Shift Keying) coded modulation using set partitioning in order to further increase the capacity of satellite transmission. As a way of improving the transmission performance through a channel with nonlinear distortion caused by the satellite transponder, we designed 64APSK coded modulation that considered the characteristics of the nonlinear distortion on the satellite and the performance of an adaptive equalizer using the LMS (Least Mean Squares) algorithm on the receiver side. Using the number of constellation points on the circles of 64APSK modulation as parameters, we designed the number of the points, bit allocation to signal points and LDPC (Low Density Parity Check) coding that obtain the best required carrier-to-noise ratio (C/N) after error correction under the condition of an output back-off of 5 dB, which is the optimum operation point of a 12-GHz-band satellite transponder during 64APSK transmission. Computer simulations showed that the designed 64APSK coded modulation (our proposed method)(1) improved the required C/N by approximately 0.4 dB compared with the conventional method optimized by considering only AWGN (Additive White Gaussian Noise), when the output back-off was 5 dB (Figure 1-10).
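For reference, an LMS adaptive equalizer of the kind mentioned above updates its filter taps from the error between the equalized output and a known (or decided) reference symbol, gradually compensating the channel distortion. The sketch below is a generic complex-valued LMS loop in Python; the tap count, step size and training setup are illustrative assumptions, not the receiver design used in the experiments.

```python
import numpy as np

def lms_equalize(received, training, num_taps=7, step=0.01):
    """Generic complex LMS equalizer: adapt FIR taps so that the filtered
    received signal tracks the known training symbols."""
    received = np.asarray(received, dtype=complex)
    training = np.asarray(training, dtype=complex)
    taps = np.zeros(num_taps, dtype=complex)
    taps[0] = 1.0                                   # start as a pass-through filter
    buf = np.zeros(num_taps, dtype=complex)
    out = np.zeros(len(received), dtype=complex)
    for n, sample in enumerate(received):
        buf = np.roll(buf, 1)
        buf[0] = sample
        y = np.dot(taps, buf)                       # equalizer output
        err = training[n] - y
        taps = taps + step * err * np.conj(buf)     # LMS tap update
        out[n] = y
    return out, taps

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # QPSK training symbols distorted by a simple complex gain/phase error.
    symbols = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
    received = 0.7 * np.exp(1j * 0.5) * symbols
    out, _ = lms_equalize(received, symbols, step=0.05)
    print(np.round(np.mean(np.abs(out[-500:] - symbols[-500:]) ** 2), 4))  # near 0 after adaptation
```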

We prototyped a cross-polarization interference cancellation device equipped with a negative-phase synthesized algorithm that reduced the deterioration in transmission performance caused by the simultaneous reception of right- and left-hand circularly polarized waves. We demonstrated that this interference cancellation function improved the required C/N by 0.2 dB when the cross-polarization discrimination was 25 dB and the modulation scheme of the desired waves was 32APSK (3/4).

■ Advanced satellite broadcasting systems
With the aim of increasing the capacity of 12-GHz-band satellite broadcasting by using multilevel coded modulation, we

■MMT-based IP multicast delivery technology

To promote 8K SHV broadcasting, we verified MMT-based IP multicast delivery technology that could be used for the IP retransmission of broadcasting in closed networks of cable TV stations and other service providers and for the IP delivery of the relevant content linked with broadcasting. We conducted a delivery experiment using live content from the ISU Grand Prix of Figure Skating 2017/2018, NHK Trophy, and demonstrated that 4K/8K content can be delivered to multiple content delivery service providers simultaneously with low latency. As an example of the application of a high-accuracy synchronization scheme using an absolute time stamp, which is a feature of MMT, we developed an inter-terminal synchronization technology that allows multiple content streams delivered by IP multicast to be displayed in synchronization without delay, by adjusting the display timing of the video on the basis of clock synchronization among multiple reception terminals. We exhibited interactive content, "Domo's Slapstick Race," using this technology at the NHK STRL Open House 2017, CEATEC JAPAN 2017 and NHK Science Stadium 2017, demonstrating the feasibility of a new viewing experience(4).
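As a rough sketch of how such inter-terminal synchronization can work, a terminal whose clock has been synchronized simply defers presentation of each access unit until the absolute time stamp signaled by MMT is reached; the function and parameter names below are illustrative, not those of the actual implementation.

import time

def present_at(absolute_pts_seconds, local_clock_offset=0.0, render=lambda: None):
    """Wait until the signaled absolute presentation time, then render.
    absolute_pts_seconds: presentation time expressed as UTC seconds (illustrative representation).
    local_clock_offset: correction of the local clock obtained from clock synchronization."""
    now = time.time() + local_clock_offset
    delay = absolute_pts_seconds - now
    if delay > 0:
        time.sleep(delay)  # every terminal releases the same frame at the same instant
    render()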

[References]
(1) S. Aoki et al.: "A Study on IP Multiplexing Scheme in Next-generation Terrestrial Broadcasting System," ITE Annual Convention, 14C-1 (2017) (in Japanese)
(2) Y. Kawamura et al.: "Field Experiment of Hybrid Video Delivery Using Next-Generation Terrestrial Broadcasting and a Cellular Network," IEEE International Conference on Consumer Electronics 2018, pp.173-174 (2018)
(3) Y. Kawamura et al.: "An Implementation of a Cooperative Broadcast and Cellular Network System for Mobile Reception of Next-Generation Terrestrial Broadcasting," ITE Annual Convention, 14C-4 (2017) (in Japanese)
(4) Y. Kawamura: "Inter-Terminal Synchronization Technology Using MMT," Cable New Era, vol.4, no.7, p.47 (2017) (in Japanese)

Figure 1-9. Field experiment using a complementary system (hybrid reception along the vehicle route through broadcast coverage and shielded areas; map image: Google)

Figure 1-10. Transmission performance of 64APSK coded modulation (bit error rate versus C/N for the proposed and conventional methods)


1.9 Terrestrial broadcasting transmission technology

For the terrestrial broadcasting of SHV, we made progress in our R&D on a next-generation terrestrial broadcasting system, the establishment of a large-scale experiment environment, channel planning and next-generation single-frequency network (SFN) technology. Part of this research was being performed under the auspices of the Ministry of Internal Affairs and Communications, Japan as part of its program titled "Research and Development for Advanced Digital Terrestrial Television Broadcasting System," in cooperation with Sony Corporation, Panasonic Corporation, Tokyo University of Science and NHK Integrated Technology Inc.

■Next-generation terrestrial broadcasting system

We worked on the detailed design and performance improvement of a preliminary standard for next-generation terrestrial broadcasting. In FY 2017, we designed low-density parity-check (LDPC) codes, investigated a hierarchical transmission method, reexamined the transmission and multiplexing configuration and control (TMCC) transmission scheme and evaluated the transmission characteristics of the preliminary standard through computer simulations.

Regarding the LDPC codes, we redesigned some of the 69,120-bit-length codes (long codes) that we designed in FY 2016 for the preliminary standard and also newly designed 17,280-bit-length codes (short codes). By using an appropriate multiedge-type (MET) structure for codes with a low coding rate, we demonstrated that both long codes and short codes with any coding rate (2/16 to 14/16) can achieve the same or better performance than ATSC 3.0 (Figure 1-12).
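For reference, the information block size follows directly from the code length N and rate R (the two rates below are merely examples within the stated 2/16 to 14/16 range):

k = N R: \quad N = 69{,}120,\; R = \tfrac{8}{16} \;\Rightarrow\; k = 34{,}560 \;\text{bits}; \qquad N = 17{,}280,\; R = \tfrac{2}{16} \;\Rightarrow\; k = 2{,}160 \;\text{bits}.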


■Advanced satellite broadcasting systems

With the aim of increasing the capacity of 12-GHz-band satellite broadcasting by using multilevel coded modulation, we are investigating a way of increasing the output power of the broadcasting satellite transmission. If the output power is increased, the side lobes of an on-board antenna need to be suppressed to keep radio wave interference to other countries below the level agreed by international adjustment. To develop an on-board 12-GHz dual-polarized reflector antenna with low side lobes, which is capable of separately receiving right- and left-hand circularly polarized waves, we selected and prototyped a corrugated horn antenna to be used as the feeder. The radiation patterns of our prototype feeder agreed with the designed values and a cross-polarization discrimination of 30 dB or more was obtained in the 300-MHz bandwidth. We plan to design a dual-reflector antenna using two noncircular-aperture reflectors based on the designed feeder to further reduce the side lobes of an on-board antenna.

We designed a 12/21-GHz-band dual-polarized feeder that allows a single antenna to receive both 12-GHz-band and 21-GHz-band satellite broadcasting by right- and left-hand circular polarization. The feeder has a multilayer structure of four-element microstrip array antennas, which enables the simultaneous arrangement of both bands at the focal point of the reflector (Figure 1-11). The feeder obtained a VSWR (Voltage Standing Wave Ratio) of 1.1 or less for both frequency bands. An evaluation using the feeder design values showed that an offset parabolic reflector antenna with a 50-cm aperture diameter achieves gains of more than 34 dBi for the 12-GHz band and 38 dBi for the 21-GHz band as well as a cross-polarization discrimination of more than 25 dB.

Weak signals in the intermediate-frequency range (2.2 GHz to 3.2 GHz) for left-hand circular polarization leak from a reception system for 12-GHz-band 4K/8K satellite broadcasting. To measure these signals, we devised a method for improving the C/N of the measured signals (leaked signals) by correlation processing of the leaked signals and received signals, and we prototyped and evaluated a measurement tool capable of the high-precision measurement of the leaked power. We confirmed with the prototype that the C/N can be improved by 40 dB or more by limiting the band of the correlation output signals with a narrow band-pass filter.

As a feeder that increases the output satellite transmission power by spatial synthesis using a 21-GHz-band array-fed reflector antenna, we prototyped a three-element partial model using a sequential array structure in which horn antennas are arranged with rotational symmetry. We confirmed that a cross-polarization discrimination of 30 dB or more is obtained by reducing radio waves reflected inside the neighboring elements.

To conduct a wide-band transmission experiment using a 21-GHz-band experimental transponder of the BSAT-4a broadcasting satellite and evaluate rain attenuation characteristics in the 21-GHz band, we set up 21-GHz-band reception equipment that combines a parabolic antenna with a 1.5-m aperture diameter (48 dBi gain) and an automatic satellite-tracking system.

[References]
(1) Y. Koizumi, Y. Suzuki, M. Kojima, H. Sujikai and S. Tanaka: "Study on the optimization of 64APSK coded modulation design under the nonlinear transmission path simulating the satellite transponder characteristics," Proceedings of the 2018 IEICE General Conference, B-3-10 (2018) (in Japanese)

Figure 1-11. Structure of the 12/21-GHz-band dual-polarized feed antenna (multilayer microstrip antennas (MSAs) for the 12-GHz and 21-GHz bands with their feed circuits on a ground substrate; element spacings of 14.7 mm (0.6 wavelength at 12 GHz) and 6.9 mm (0.5 wavelength at 21 GHz))

Figure 1-12. Relationship between required C/N and transmission rate (bps/Hz) for the long and short codes, compared with ATSC 3.0


For the hierarchical transmission method, we investigated layered division multiplexing (LDM), which transmits two signals with different transmission robustness after mapping and adding them(1). We implemented a system that applies LDM to the whole signal bandwidth and a system that applies LDM only to the partial reception band into our modulator and demodulator. We also prototyped a demodulator for mobile reception that receives only the partial reception band (1.5-MHz bandwidth) and uses a sampling rate reduced to 1/4 that of a conventional device, with the aim of reducing the power consumption of a frequency-division multiplexing (FDM) receiver. Moreover, as a technology for improving the characteristics, we implemented a TMCC transmission scheme using differential space-frequency block codes (DSFBCs) and a frequency interleaver that performs a cyclic shift in segment units for each OFDM symbol to increase the frequency diversity gain.
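Conceptually, LDM superposes the two streams at different power levels before OFDM transmission; the sketch below shows only this superposition step, with an assumed injection level (the value and the normalization convention are illustrative, not the parameters used in the prototype).

import numpy as np

def ldm_superpose(core_symbols, enhanced_symbols, injection_level_db=-5.0):
    """Superpose a robust (core) layer and a high-capacity (enhanced) layer.
    injection_level_db: power of the enhanced layer relative to the core layer (assumed value)."""
    g = 10.0 ** (injection_level_db / 20.0)              # amplitude ratio of the enhanced layer
    combined = np.asarray(core_symbols) + g * np.asarray(enhanced_symbols)
    return combined / np.sqrt(1.0 + g ** 2)              # keep average total power at 1 (assumed convention)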

Using a modulator and demodulator that we developed in FY 2016, we evaluated the reception characteristics of the partial reception band through laboratory experiments and field experiments with an STRL experimental station(2). As a result of various modifications, including the use of LDPC codes for error-correcting codes, the expansion of the partial reception bandwidth to approximately 3.5 times that of One-Seg for terrestrial TV broadcasting and the provision of a longer time-interleave length option, the required field strength was reduced by 2.6 dB and the transmission efficiency was improved by 60% compared with those of One-Seg (Figure 1-13). We reflected this partial reception capability compliant with the preliminary standard in a channel coding system for sponsored research.

■Construction of a large-scale experimental environment for next-generation terrestrial broadcasting

As part of the program titled "Research and Development for Advanced Digital Terrestrial Television Broadcasting System," we prepared to set up experimental transmission stations in Tokyo and Nagoya. In FY 2017, we designed a transmitter and developed a transmission antenna that will be used at the experimental station in Tokyo. For Nagoya, we designed two experimental transmission stations (one with the size of a main station and the other with the size of a relay station) and developed a transmission antenna for the main station and a transmitter for the relay station.

In large-scale experiments to be conducted in both areas, we plan to use signals with a bandwidth extended beyond that of current terrestrial TV broadcasting. We conducted laboratory experiments to confirm the permissible values of co-channel interference and adjacent-channel interference for the signals with the extended bandwidth and to investigate the co-channel and adjacent-channel protection ratios for the extended-bandwidth signal. We also assessed the protection ratios for current terrestrial TV broadcasting signals interfered with by the extended-bandwidth signal using 15 kinds of terrestrial TV receivers(3).

To obtain licenses in both areas, we provided the relevant broadcasters with a prior explanation of the transmission specifications of the experimental stations and their possible impact on terrestrial TV broadcasting and gained approval for the transmission specifications (Table 1-2). We applied to the Regional Bureaus of Telecommunications for experimental station licenses with the goal of starting radio emission in the autumn of 2018.

■Channel planning

Since FY 2016, we have been engaging in channel planning in cooperation with the Engineering Administration Department with the aim of enabling terrestrial SHV broadcasting that uses the same UHF band as current terrestrial TV broadcasting. In FY 2017, we studied new channels for terrestrial SHV broadcasting and the frequency reallocation of existing digital terrestrial broadcasting. We reviewed the selection criteria for determining the availability of channels and increased the number of calculation points, which improved the accuracy of the repacking scale estimation.

■Next-generation SFN technology

The remux equipment that we developed in FY 2016 has the capability of sending eXtensible Modulator Interface (XMI) signals that will be entered into an OFDM modulator. It can therefore control the signal output timing of the modulator according to each transmitting station's transmission timing information, which is multiplexed with the XMI packets, to enable an SFN.

Figure 1-13. Comparison of required field strength between the preliminary standard and One-Seg (transmission efficiency [bps/Hz] versus required field strength [dBμV/m] in a mobile reception environment (street driving); relative to One-Seg QPSK (2/3), the preliminary standard improved the efficiency by 60% and reduced the required field strength by 2.6 dB; values in parentheses denote code rates)

Figure 1-14. System diagram of the SFN transmission experiment (video/sound signals are remultiplexed and delivered in XMI format over an optical IP network to the modulators at the Hitoyoshi and Mizukami experimental stations, which transmit according to the timing information)

Table 1-2. Transmission specifications of experimental stations

                       Tokyo                Nagoya               Nagoya
Type                   Main station         Main station         Relay station
Location               Minato Ward, Tokyo   Nagoya City, Aichi   Yatomi City, Aichi
Transmission channel   UHF ch28             UHF ch35             UHF ch35
Polarization           Horizontal and vertical (dual-polarized MIMO) for all stations
Transmission power     Horizontal 1 kW,     Horizontal 1 kW,     Horizontal 10 W,
                       Vertical 1 kW        Vertical 1 kW        Vertical 10 W


In FY 2017, we conducted an SFN field experiment over an optical IP network in Hitoyoshi City, Kumamoto Prefecture. In the experiment, XMI packets sent from a remux installed at the Hitoyoshi experimental station were delivered to the Mizukami experimental station over the optical IP network. Then, OFDM signals generated by the modulators at the Hitoyoshi and Mizukami stations were transmitted from the two stations using the same frequency (UHF ch46). We confirmed, at a reception point set up in a place where radio waves from the two stations arrive, that video was successfully transmitted without errors in an SFN environment. We also observed the difference in the arrival time of radio waves from the two stations and confirmed the operation of the transmission timing control function (Figure 1-14).
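The essence of the timing control is that every modulator emits a given OFDM frame at the same absolute instant, chosen far enough in the future to absorb the worst-case IP delivery delay; the sketch below is a simplified illustration of that rule, not the actual XMI signaling (the names and the margin are assumptions).

def emission_wait(frame_timestamp, delivery_margin, local_time):
    """Return how long a modulator should wait before emitting a frame received over IP.
    frame_timestamp: absolute reference time attached to the frame by the remux (illustrative).
    delivery_margin: fixed offset covering the worst-case network delay (assumed value).
    local_time: current time taken from a common reference such as GPS."""
    target = frame_timestamp + delivery_margin
    return max(0.0, target - local_time)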

We researched the use of space-time coding (STC) for SFN technology (coded SFN) to reduce the deterioration of transmission characteristics, which occurs when radio waves arrive from multiple transmitting stations in an SFN area. Computer simulations using channel characteristics collected in areas where an SFN is formed for terrestrial digital broadcasting showed that using the coded SFN technology can improve transmission characteristics by up to 4.8 dB even in cases where the characteristics degrade with a conventional SFN (Figure 1-15)(4).
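The report does not detail the specific space-time code; as a generic illustration, an Alamouti-type block code, the most common two-transmitter STC, maps symbol pairs onto the two stations as in the sketch below (a representative example only, not necessarily the coded-SFN scheme evaluated above).

import numpy as np

def alamouti_encode(symbols):
    """Map an even-length complex symbol sequence onto two transmitters.
    Over two symbol periods, station A sends (s1, -conj(s2)) while station B sends (s2, conj(s1))."""
    s = np.asarray(symbols).reshape(-1, 2)
    tx_a = np.column_stack((s[:, 0], -np.conj(s[:, 1]))).ravel()
    tx_b = np.column_stack((s[:, 1], np.conj(s[:, 0]))).ravel()
    return tx_a, tx_b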

■International collaboration

ITU-R WP6A (Terrestrial broadcasting delivery) is preparing a report titled "Collection of field trials of UHDTV over DTT networks." In FY 2017, we proposed adding new information about terrestrial transmission experiments using a non-uniform constellation and 8K transmission experiments in an SFN environment using HEVC/H.265 compression.

As part of research toward next-generation terrestrial broadcasting, we enrolled in the 3rd Generation Partnership Project (3GPP), which standardizes mobile communication systems, and began investigating the standardization trend of 5G. We also began a study on the use of 5G systems for broadcasting in cooperation with the European Broadcasting Union (EBU).

We visited KBS and ETRI of South Korea to investigate the situation of terrestrial 4K broadcasting, which was started on May 31, 2017, and future mobile services.

We conducted a questionnaire-based survey about emergency warning broadcasting at the Future of Broadcast Television (FOBTV), where broadcasters around the world gather, and reported the collected results at meetings at the NAB Show in April and IBC 2017 in September.

As part of activities at the Digital Broadcasting Experts Group (DiBEG), we exchanged opinions about next-generation terrestrial broadcasting with SBTVD-Forum, a standardization organization in Brazil.

[References]
(1) A. Sato et al. (NHK): "A Study on Applying Layered Division Multiplexing for Next Generation Digital Terrestrial Broadcasting," ITE Tech. Rep., Vol.41, No.6, BCT2016-34, pp.45-48 (2017) (in Japanese)
(2) H. Miyasaka et al. (NHK): "A Study on the Partial Reception for the Proposed Specification of the Next Generation Terrestrial Broadcasting," ITE Annual Convention 2017, 14C-3 (2017) (in Japanese)
(3) N. Shirai et al. (NHK): "A Study on Bandwidth Extension for the Proposed Specification of the Next Generation Terrestrial Broadcasting," ITE Tech. Rep., Vol. 42, No. 11, BCT2018-48, pp.43-46 (2018) (in Japanese)
(4) A. Sato et al. (NHK): "Transmission Performance Evaluation of SFN Technology with Space Time Coding," ITE Tech. Rep., Vol. 41, No. 43, BCT2017-92, pp.37-42 (2017) (in Japanese)

1.10 Wireless transmission technology for program contributions (FPU)

With the goal of realizing SHV live broadcasting of emergency reports and sports coverage, we are conducting R&D on field pick-up units (FPUs) for transmitting video and sound materials. In FY 2017, we researched a microwave-band FPU and a 1.2-GHz/2.3-GHz-band FPU.

■Microwave-band FPU

We researched a microwave-band (6/6.4/7/10/10.5/13-GHz-band) FPU that can transmit SHV video signals over a distance of 50 km, which is the same as the transmission distance of current FPUs for Hi-Vision, and worked toward its standardization.

Figure 1-15. Comparison of transmission characteristics between coded SFN and conventional SFN (required CA/N [dB] versus the received power ratio CA/CB of waves from the two stations (Stations A and B) constituting an SFN, for a two-wave model with 10-μs delay and for collected channel characteristics; the coded SFN improves the characteristics by up to 4.8 dB)


We previously employed dual-polarized multiple-input multiple-output (MIMO) technology and orthogonal frequency-division multiplexing (OFDM) with a higher-order modulation scheme to expand the transmission capacity(1). Additionally, in FY 2017, we quadrupled the FFT size from 2,048 points to 8,192 points to increase the ratio of the effective symbol duration to the guard interval. We also introduced a non-uniform constellation, optimized the OFDM pilot signal level(2), improved the bit interleaving and used LDPC codes for error correction to reduce the required C/N. These improvements resulted in a maximum transmission capacity of 312 Mbps, 5.7 times that of current FPUs (55 Mbps), and a transmission capacity of 200 Mbps, 3.6 times that of current FPUs, under the same required C/N (Figure 1-16).
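The benefit of the larger FFT can be seen from the OFDM symbol efficiency T_u / (T_u + T_g); the guard-interval ratio below is an assumed example, since the actual parameters are not listed here. If T_g = T_u / 8 at 2,048 points, then quadrupling T_u (8,192 points) while keeping the same absolute guard interval gives T_g = T_u' / 32, raising the efficiency from 8/9 \approx 0.889 to 32/33 \approx 0.970.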

At ARIB, we contributed to the establishment of a new standard incorporating the above technologies, ARIB STD-B71 "Portable Microwave Band OFDM Digital Transmission System for Ultra High Definition Television Program Contribution."

■1.2-GHz/2.3-GHz-band FPU

To enable the mobile relay broadcasting of SHV signals by using the 1.2-GHz/2.3-GHz band, we are researching a MIMO system with adaptive transmission control using the time division duplex (TDD) scheme.

In FY 2017, we improved functions to expand the transmission rate. For uplinks that transmit SHV video signals from a mobile station to a base station, we increased the amount of information that can be transmitted per OFDM carrier symbol from 14 bits to 16 bits and improved the time ratio of uplinks to downlinks that transmit signaling information from a base station. This achieved wireless transfer at a maximum rate of about 140 Mbps. We also developed a function for supporting multiple base stations to expand the transmission service area for a road race relay. This function selects four antennas having good reception quality out of many antennas and uses them for demodulation.

In our work on rate matching, which controls the coding rate of error correction codes adaptively according to the varying channel quality to prevent transmission errors, we implemented a function to control the coding bit rate of 8K video according to the variation in the error correction coding rates into a prototype system. We connected the system with an HEVC/H.265 codec with a variable rate and demonstrated that it can transmit SHV video signals at a rate varying in the range from 50 Mbps to 140 Mbps.
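In effect, the encoder bitrate is slaved to the instantaneous physical-layer rate; the rule below is a deliberately simplified illustration (the overhead margin is an assumption, and the 50 to 140 Mbps clamp simply reflects the range quoted above, not the actual control law).

def target_video_bitrate_mbps(phy_rate_mbps, overhead_fraction=0.15):
    """Choose an 8K HEVC encoder bitrate that fits under the current radio-link capacity.
    overhead_fraction: assumed share reserved for audio, signaling and protocol overhead."""
    usable = phy_rate_mbps * (1.0 - overhead_fraction)
    return max(50.0, min(140.0, usable))  # clamp to the range demonstrated with the prototype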

Using this prototype system (Figure 1-17), we conducted field transmission experiments on assumed marathon courses around our laboratory and in urban areas. The experiments evaluated the mobile transmission characteristics of a MIMO system with adaptive transmission control, which dynamically changes the number of MIMO streams to be multiplexed, the modulation scheme and the error correction coding rate according to the channel status, and demonstrated the mobile transmission of SHV video signals in excess of 100 Mbps by the system connected with a variable-rate 8K codec(3).

Part of this research was conducted as a government-commissioned project from the Ministry of Internal Affairs and Communications titled "R&D on Highly Efficient Frequency Usage for the Next-Generation Program Contribution Transmission."

[References]
(1) K. Murase, H. Kamoda, K. Shibuya, N. Iai and H. Hamazumi: "Microwave Link for 4K and 8K Broadcasting Program Contribution," ITE Annual Convention 2017, 32E-1 (2017) (in Japanese)
(2) K. Murase, H. Kamoda, K. Shibuya, N. Iai, K. Imamura and H. Hamazumi: "A Study on OFDM Pilot Symbol Level for 4K and 8K Microwave Link for Program Production," ITE Technical Report, Vol. 42, No. 5, BCT2018-34, pp. 41-44 (2018) (in Japanese)
(3) K. Mitsuyama, F. Uzawa, F. Ito and N. Iai: "Outdoor Field Trial of 4×4 TDD-SVD-MIMO System with Adaptive Transmission Control," ITE Technical Report, Vol.41, No.35, BCT2017-84, pp.13-16 (2017) (in Japanese)

Figure 1-16. Transmission capacity and required C/N of the microwave-band FPU (transmission capacity [Mbps] versus required C/N [dB] for the developed FPU with 16QAM to 4096QAM and for current FPUs with QPSK to 64QAM; the 5.7-times and 3.6-times capacity increases over the current FPU are indicated; R: coding rate)

Figure 1-17. Prototype MIMO system with adaptive transmission control (transmission and reception RF units and control units on the mobile station and base station sides, with power amplifiers and a channel emulator)


1.11 Wired transmission technology

We are researching a program production and program contribution system using Internet Protocol (IP) technology that can be used for 8K programs. We are also studying a channel bonding technology for transmitting 8K programs over cable TV networks and the FTTH (Fiber to the Home) digital baseband transmission system.

■IP-based program production and program contribution system

Applying IP technology to program production and program transmission allows signals of various formats for video, sound, synchronization and control to be temporally multiplexed and transmitted over a shared network at a low cost. In FY 2017, we worked on the following three R&D areas:

(1) Experiment on remote audio mixing using 8K IP transmission

While conventional live program production requires a conversion process to synchronize signals received from a venue with the broadcast station's master clock, an IP-based program production system can use the same clock for the venue and the broadcast station by exchanging clock synchronization information between them using Precision Time Protocol (PTP); the basic two-way time-transfer arithmetic behind PTP is sketched after item (3) below. We verified the synchronization performance at the ISU Grand Prix of Figure Skating 2017/2018, NHK Trophy, by time-multiplexing and transmitting 8K video packets, 128-ch sound packets and PTP packets over commercial IP networks between Osaka and Tokyo. The results showed that sound from a live coverage venue in Osaka can be mixed at the NHK Broadcast Center in Tokyo if the jitter of the PTP packets is about one microsecond(1). However, some production devices failed to achieve clock synchronization. We plan to improve the synchronization algorithm and consider ways of reducing the PTP jitter, which varies with the network structure.

(2) Development of IP transmission equipment for mezzanine-compressed 8K signals

8K program contributions used for program production need to be transmitted with high quality and low latency. However, transmitting a large amount of uncompressed signals incurs a higher network cost. We therefore developed equipment that applies mezzanine compression to 8K signals and transmits them over IP networks. This equipment can transmit 8K video at a reduced bandwidth with low latency while maintaining a high image quality comparable to that of uncompressed signals. For example, 8K program contribution signals (4:2:2 sampling, 60-Hz frame rate, 40-Gb/s video bandwidth) that are mezzanine-compressed to 1/5 (post-compression video bandwidth of 8 Gb/s) can be transmitted over a single cable for general 10-Gb Ethernet. Laboratory experiments demonstrated that the equipment can transmit mezzanine-compressed 8K signals stably with high image quality and low latency (Figure 1-18). We plan to conduct field transmission experiments, support a 120-Hz frame rate and implement an error correction function(2).

(3) Development of an IP transmission system converter

For the interconnection between devices that use different signal transmission formats and equipment control methods in an IP-based program production system, we developed a mechanism for converting the format and control method and fabricated a prototype converter.

The IP video router (IPVR) system being developed at NHK as a network matrix uses a different transmission format and control method from those of devices for IP-based program production systems. Using our prototype converter, we conducted a connection test between the IPVR and an IP program production system that we prototyped in FY 2017. The results demonstrated that the control device of the IP program production system can control the IPVR and that IPVR video signals can be transmitted to the IP program production system.
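Regarding item (1) above, the clock alignment rests on the standard two-way time-transfer arithmetic of PTP, sketched below with generic variable names; the jitter of these timestamp exchanges is what limited the mixing accuracy in the experiment.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Two-way time transfer as used by PTP.
    t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0            # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0   # assumes a symmetric network path
    return offset, mean_path_delay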

■Cable TV transmission of SHV signals

We continued with our R&D on a channel bonding technology to transmit partitioned 8K signals over multiple channels so that 8K programs can be distributed through existing coaxial cable television networks. In FY 2016, we developed a compact receiver equipped with a demodulator LSI that supports the channel bonding technology. In FY 2017, we conducted experiments on the retransmission of 8K satellite broadcasting over commercial cable networks using this receiver. The results showed that our compact receiver can receive retransmitted 8K signals stably. We also helped Japan Cable Laboratories compile requirements for a demonstration experiment on the retransmission operation specifications for new 4K/8K satellite broadcasting and prepare experimental procedures with the aim of realizing a 4K/8K retransmission service on cable TV.
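The principle of channel bonding is simply to split the high-rate stream across several RF channels and restore packet order at the receiver; the round-robin sketch below is an illustration only and omits the sequence headers and signaling defined in the actual cable transmission standard.

def bond(packets, n_channels):
    """Distribute packets over bonded channels, tagging each with its original index."""
    channels = [[] for _ in range(n_channels)]
    for i, pkt in enumerate(packets):
        channels[i % n_channels].append((i, pkt))
    return channels

def unbond(channels):
    """Merge the bonded channels and restore the original packet order."""
    tagged = [item for ch in channels for item in ch]
    return [pkt for _, pkt in sorted(tagged, key=lambda item: item[0])]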

■Digital baseband transmission system for FTTH

As a way of distributing broadcasts to homes using FTTH, we are studying a 10-Gbps-class digital baseband transmission system that divides multichannel streams of 8K and Hi-Vision broadcasting into IP packets and multiplexes them with baseband signals by using time-division multiplexing. In FY 2017, we studied an intra-building transmission system for condominium buildings installed only with coaxial cables, which cannot transmit baseband signals. The intra-building transmission system selects IP packets in response to a viewer request and converts them to radio frequency (RF) signals so that they can be transmitted over an existing coaxial cable. We prototyped a transmitter and receiver that comply with the Data Over Cable Service Interface Specification (DOCSIS) used for internet communications on cable TV, added a function to improve the frequency usage efficiency and demonstrated the effectiveness of the system. We also investigated a way of migrating the existing FTTH equipment for RF signal transmission to the digital baseband transmission system in stages(3).

[References]
(1) M. Kawaragi, T. Koyama, J. Kawamoto, S. Kitajima and T. Kurakake: "Study on Synchronization Method for IP Remote Production," Vol. 42, No. 11, BCT2018-39, pp. 5-8 (2018) (in Japanese)
(2) J. Kawamoto and T. Kurakake: "XOR-based FEC to Improve Burst-Loss Tolerance for 8K Ultra-High Definition TV over IP Transmission," IEEE GLOBECOM2017, CSSMA. 4-05 (2017)
(3) T. Kusunoki, Y. Hakamada and T. Kurakake: "A study for coexistence transmission of SCM and 10Gbps baseband video signals over FTTH networks," BCT2017-90, pp. 45-48 (2017) (in Japanese)

Figure 1-18. Laboratory experiment on the IP transmission of mezzanine-compressed 8K signals (transmitter, receiver and 8K video after IP transmission with mezzanine compression)


1.12 Domestic standardization of broadcasting systems

We are engaged in domestic standardization activities related to 4K and 8K ultra-high-definition television satellite broadcasting systems.

For the start of new 4K/8K satellite broadcasting in 2018, the Association for Promotion of Advanced Broadcasting Services (A-PAB) has been preparing operational guidelines(1). ARIB worked on revisions of its technical standards that specify the television systems in cooperation with A-PAB (Table 1-3). Members of NHK STRL contributed to these standardization efforts for ultra-high-definition television broadcasting by participating as the committee chairperson of an ARIB development section and as managers and members of the relevant ARIB working groups.

[References]
(1) ARIB Technical Report TR-B39 1.7, "Operational guidelines for advanced digital satellite broadcasting" (2018) (in Japanese)

Table 1-3. Major revisions of the ARIB standards for ultra-high-definition television satellite broadcasting systems

Multiplexing (MMT/TLV), STD-B60 (ver. 1.12): Change in the specification for HEVC video descriptors supporting HDR/wide color gamut and clarification of MMT specifications

Conditional access, STD-B61 (ver. 1.4): Specification of the number of components (e.g., video, audio and data) and scramble keys that can be simultaneously processed by a receiver

Multimedia coding, STD-B62 (ver. 1.9): Addition of the specification for handling ideographic variants, clarification of the receiver reference model, and clarification of the communication capability

Receiver, STD-B63 (ver. 1.7): Addition of the specification for digital video and audio output supporting HDMI 1.4b and 2.1, and specification of performance requirements such as the number of components (e.g., video, audio and data) and scramble keys that can be simultaneously processed by a receiver


2 Three-dimensional imaging technology

With the goal of developing a new form of broadcasting, we are conducting R&D on the integral method and the holographic method for a spatial imaging three-dimensional (3D) television that shows more natural 3D images to viewers without special glasses. We conducted comprehensive research on 3D imaging, which includes image capturing and displaying technologies, coding methods, required system parameters and display device development. In parallel, we made progress in our R&D on multidimensional image representation technology using sensor information that can be used to produce attractive live programs, including live sports broadcasting.

In our research on display technologies based on the integral 3D method, we are developing basic technologies to increase the quality of displayed images using multiple display equipment. In FY 2017, we developed a direct-view display system using a new multi-image-combining optical system and a high-density 8K organic light-emitting diode (OLED) display (1,000 ppi or higher), which increased the number of pixels and the viewing-zone angle to 54,000 pixels and approximately 20 degrees, respectively. We also prototyped an optical synthesis system using three sets of an 8K OLED and lens array to reduce the color moiré caused by a direct-view display system. Moreover, we progressed with our development of a parallel projection system capable of the high-density arrangement of multiple high-definition projectors as display equipment that can flexibly extend the number of light rays of 3D images. We verified its effect on improving resolution characteristics using a prototype system consisting of six compact high-definition projectors.

We continued to participate in the activities of the MPEG-I Visual ad hoc group, which aims to standardize new coding technologies for 3D images. We also conducted subjective experiments applying 3D High Efficiency Video Coding (3D-HEVC), which is a conventional multiview coding method, to integral 3D images after converting the elemental images in the integral images into multi-viewpoint images, which can be compressed by 3D-HEVC.

In our research on image-capturing technologies based on the integral 3D method, we are studying technologies to obtain spatial information by using multiple cameras and lens arrays for creating high-quality 3D images. In FY 2017, we developed a camera array system having 154 cameras arranged in two dimensions and established a basic technology for generating elemental images for integral 3D images from multi-viewpoint images. Using this system, we conducted verification experiments on displaying 100,000-pixel 3D moving images. We also established a capture technology that generates 3D models of an object from images captured by multi-viewpoint 4K robotic cameras and converts them into elemental images. With regard to this technology, we developed a new 3D model generation technology that employs a nonlinear depth-compression expression technology.

In our research on the system parameters of the integral 3D method that we started in FY 2015, we are investigating the relation between the display parameters (the focal length of the lens array, the pixel pitch of the display device, and so on) and image quality (in terms of the depth-reconstruction range, resolution and viewing zone) through subjective evaluations of simulated integral 3D images. In FY 2017, we evaluated the image quality of nonlinear depth-compressed images displayed by using stereoscopic 3D display equipment that reproduces binocular parallax, motion parallax and spatial frequency characteristics. Similarly, we conducted subjective evaluations with an integral 3D display simulator that uses nonlinear depth compression technology.

In our research on display devices, we have been studying electronic holography devices and beam-steering devices. For electronic holography, we continued to study spatial light modulators (SLMs) using spin transfer switching (spin-SLM). In FY 2017, we prototyped an active-matrix-driven spin-SLM (1K×1K pixels) using a tunnel magnetoresistance element with a narrow 2-μm pixel pitch and successfully demonstrated the display of 2D images. To improve the performance of an SLM, we proposed and prototyped a light modulation element driven by current-induced domain wall motion and a voltage-controlled magnetic anisotropy effect device, which are capable of low-current operation, and we demonstrated their basic operating principles. With the aim of building an integral 3D display that uses beam-steering devices instead of a lens array, we studied an optical phased array consisting of optical waveguides that uses an electro-optic polymer. In FY 2017, we designed and prototyped a 1D optical phased array consisting of eight channels with a 10-μm pitch, which achieved a light beam deflection of ±3.2 degrees.

In our research on multidimensional image presentation technology, we are researching and developing technologies for new image presentation techniques using real-space sensing, such as a high-speed object-tracking system using a near-infrared camera called a sword tracer; a method to estimate the head pose of a soccer player; a sports virtual technology to enable CG synthesis using crane cameras; a studio robot for joint performance with CGs; Sports 4D Motion, which is a system combining multi-viewpoint images and CGs; a 2.5-dimensional multimotion representation technique that enables time-series image presentation; and a naturally enhanced image representation method for a flying object to generate the trajectory of a golf ball in real time. In FY 2017, we verified the effectiveness of each basic system through prototyping, field experiments and application to program production and identified issues to be resolved toward practical use.



2.1 Integral 3D imaging technology

■ 3D display technology

We are researching spatial imaging technology as a way of creating a 3D television that reproduces more natural 3D images and does not require special glasses. The integral 3D method can reproduce natural 3D images regardless of the viewer position by reproducing light rays from objects. In FY 2017, we worked to improve the performance and quality of integral 3D images.

To increase the resolution of 3D images, we have been researching a technology for synthesizing the screens of multiple 3D images. The conventional screen synthesis technology had issues of unnaturalness, such as discontinuous switching of images in the viewing area of 3D images and at the joints between images. To improve the performance of 3D image synthesis, in FY 2017 we proposed a multi-image-combining optical system that expands the effective viewing angle. The proposed system has a structure in which the centers of the screens of multiple display panels and the lens optical axes of the optical system are shifted to achieve a larger effective viewing angle. Our prototype equipment using a high-density OLED panel with a pixel density in excess of 1,000 ppi (produced by Semiconductor Energy Laboratory Co., Ltd.) achieved about 3.2 times the number of pixels of a conventional device (311 pixels horizontal × 175 pixels vertical) and a viewing angle of 20.6 degrees in both the horizontal and vertical directions (Figure 2-1). The depth of the equipment was also reduced to less than half that of the conventional equipment(1). We exhibited this prototype at the NHK STRL Open House 2017.

Integral 3D imaging enables thin 3D display equipment because it displays 3D images by adhering a lens array to elemental images shown on a direct-view display. This method, however, degrades the quality of 3D images due to the color moiré caused by the subpixel structure of the display. To improve the quality of 3D images, in FY 2017 we proposed a technique for reducing the color moiré and increasing the resolution of 3D images by synthesizing the screens of multiple 3D images. We fabricated prototype equipment by optically synthesizing three 3D display units, each of which has a lens array adhering to an 8K OLED display (produced by Semiconductor Energy Laboratory Co., Ltd.)(2). Controlling the color moiré of the 3D images appearing on each display and synthesizing the screens using the shifted elemental lenses of the lens array successfully reduced the color moiré and increased the resolution. We confirmed these improvements in image quality with the prototype equipment and exhibited the equipment at the NHK STRL Open House 2017.

We also developed a parallel projection method that can increase the resolution of 3D images and expand the viewing angle by using multiple ultra-high-definition projectors. In this method, each projector, placed in its optimum position, superimposes projected images onto the lens array at a certain angle by using elemental images as parallel rays. This improves the 3D image resolution characteristics and viewing angle characteristics. We developed a compact 4K projector for 3D displays and prototyped a 3D display optical system and a video signal reproduction device. We also developed a technology to enable automatic and high-precision adjustment of the projection position of the elemental images. Our prototype equipment using six 4K projectors achieved 3D images with about twice the resolution of equipment using a single 8K display and a horizontal viewing angle of 31.5 degrees (Figure 2-2)(3).

Figure 2-1. 3D imaging using screen synthesis: (a) 3D image synthesis method (individual integral 3D images from high-density OLED panels and lens arrays are combined by a multi-image-combining optical system into a combined integral 3D image), (b) example of reproduced 3D images

Figure 2-2. 3D imaging by the parallel projection method: (a) prototype display equipment (projector array with collimator lens and lens array), (b) examples of reproduced 3D images from various viewpoints (left, right, upper, lower and center views)


■Coding technologies for 3D images

We are researching coding technologies for the elemental images used in the integral 3D method. In FY 2017, we investigated the relationship between the number of viewpoints and the quality of integral 3D images to identify the 3D information necessary for integral 3D images. We conducted an experiment in which we generated depth images from multi-viewpoint images, reduced the number of viewpoints and coded the images by 3D High Efficiency Video Coding (3D-HEVC). We generated and interpolated the viewpoints necessary for 3D displays after decoding the signals and conducted subjective evaluations of the image quality of the displayed 3D images. The results of an experiment using still images demonstrated that 3D images with acceptable image quality were displayed even when the number of viewpoints was reduced from 484 to 64 or less, which is more than an 80% reduction(4). We also continued to participate in standardization activities at MPEG and contributed to the standardization of multiview video.
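The quoted reduction can be checked directly: (484 - 64) / 484 \approx 0.87, i.e., using 64 of the 484 viewpoints removes about 87% of the views, consistent with the statement of more than an 80% reduction.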

■Technologies for capturing spatial information

In image capturing by the integral method, it is necessary to capture information on the directions and colors of light rays propagating through the air; such information is called spatial information. It is therefore important to study ways of capturing information on high-density light rays to improve the quality of 3D images. For this research subject, we are investigating a method for capturing spatial information from multiview video captured by multiple cameras. In FY 2016, we displayed integral 3D still images using a technique to capture various light rays by automatically moving a single camera up and down and from side to side. In FY 2017, we developed a new camera array system consisting of 154 cameras (14 cameras horizontal × 11 cameras vertical) (Figure 2-3) so that light rays propagating in the air can be captured instantaneously. This system also makes it possible to capture information on light rays propagating from a moving object. We verified the effectiveness of the system by displaying the moving images of 100,000-pixel integral 3D images based on the light ray information obtained from multiview video captured by this system (Figure 2-4)(5).

As a way of capturing spatial information without using a lens array, we are also studying a technology for generating 3D models of an object from multiview video and converting them into elemental images. We exhibited the technology at the NHK STRL Open House 2017. We also developed a new 3D model generation technology that supports a nonlinear depth-compression expression technology(6). Moreover, we employed our multi-viewpoint robotic cameras with 4K resolution for capturing multiview video. The cameras captured higher-density spatial information and further improved the quality of integral 3D images.

■System parameters

In FY 2015, we began a study aimed at determining system parameters that can serve as guidelines for designing integral 3D imaging systems. In the integral method, the spatial frequency of a displayed 3D image deteriorates sharply when displaying deep 3D scenes that have greater depth than the display's depth-reconstruction range, which is determined by display parameters such as the pixel pitch of the display equipment and the focal length of the lenses. In FY 2016, we developed a depth-compression expression technology that compresses the depth of 3D scenes and performs a no-blur, high-quality 3D visualization without inducing a sense of unnaturalness in viewers by taking advantage of the characteristics of human depth perception. Evaluation results showed that the amount of unnaturalness was acceptable even when a 3D scene with a depth in excess of 100 m was compressed into a depth range of 1 m. On the basis of these results, in FY 2017 we studied the image quality of integral 3D images generated from depth-compressed images by using stereoscopic 3D display equipment that reproduces the spatial frequency characteristics of integral 3D images in addition to binocular parallax and motion parallax. We conducted subjective evaluations using an integral 3D display simulator that compresses a 3D scene into a depth range of 1 m. By evaluating the blurring of 3D images simulated with various pixel pitches of the display equipment, we derived a pixel pitch that is not strongly affected by the deterioration of the spatial frequency of 3D images.

[References]
(1) N. Okaichi, H. Watanabe, H. Sasaki, J. Arai, M. Kawakita and T. Mishina: "Performance improvement of parallel-type integral 3D display," Proc. ITE Annual Convention 2017, 34D-1 (2017) (in Japanese)
(2) H. Sasaki, N. Okaichi, H. Watanabe, M. Kano, M. Miura, M. Kawakita and T. Mishina: "Color Moiré and Resolution Analysis of Multiple Integral Three-Dimensional Displays," Proc. IDW'17, 3D5-4L, pp.860-863 (2017)
(3) H. Watanabe: "Integral 3D Display Technology," NHK Science & Technology Research Laboratories Broadcast Technology, No. 70, p. 22 (2018)
(4) K. Hara, H. Watanabe, M. Kano, M. Katayama, T. Fujii, M. Kawakita and T. Mishina: "Coding Performance of Integral 3D Images Using Multiview Images with Depth Map," Proc. IDW'17, 3Dp1-14L, pp. 910-911 (2017)
(5) M. Kano, H. Watanabe, M. Miura, K. Hisatomi, M. Kawakita and T. Mishina: "Development of Camera Array System for Integral Imaging," Proc. IEICE General Conference, D-11-5, p. 5 (2018) (in Japanese)
(6) Y. Sawahata and T. Morita: "Estimating Depth Range Required for 3D Displays to Show Depth-Compressed Scenes without Inducing Sense of Unnaturalness," IEEE Transactions on Broadcasting, Vol. 64, No. 2, pp. 488-497 (2018)

Figure 2-3. Camera array system (approx. 260 cm × 200 cm)

Figure 2-4. Examples of reproduced integral 3D images (left: from the center; right: from each viewpoint (left, right, upper and lower views))


2.2 Three-dimensional imaging devices

■Ultra-high-density spatial light modulator

We are researching electronic holography with the goal of realizing a spatial imaging 3D television that shows natural images. Displaying 3D moving images in a wide viewing zone requires a spatial light modulator (SLM) having a very small pixel pitch and an extremely large number of pixels. We are developing an SLM driven by spin transfer switching (spin-SLM) as a display device with a narrow pixel pitch.
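The link between pixel pitch and viewing zone comes from the diffraction angle of the SLM: for a pixel pitch p, the viewing-zone angle is roughly 2\theta with \sin\theta = \lambda / (2p). The wavelength below is an assumed example, not a parameter from the report: for p = 2 \mu m and \lambda = 532 nm, \theta \approx 7.6^\circ, so 2\theta \approx 15^\circ, which is why a pitch of a few micrometers or less, together with a very large pixel count, is needed for a practical viewing zone.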

The pixels of the spin-SLM are composed of magnetic materials. The spin-SLM can modulate light by using the magneto-optical effect, in which the polarization plane of reflected light rotates according to the magnetization direction of the magnetic materials. We previously developed a tunnel magnetoresistance (TMR) light modulation device that can switch the magnetization direction at a low electric current and used the device for an active-matrix-driven spin-SLM with a pixel pitch of 2 μm and 100×100 pixels to display 2D images(1).

In FY 2017, we worked to increase the number of pixels and prototyped a spin-SLM with a pixel pitch of 2 μm and 1,000×1,000 pixels, which successfully displayed 2D images(2) (Figure 2-5). We also worked to develop a light modulation element driven by current-induced domain wall motion that has a novel structure to achieve a smaller pixel pitch and a larger number of pixels, and successfully verified the basic principle of current-induced domain wall motion in magnetic nanowires made of a gadolinium-iron alloy(3). As a future technology for reducing power consumption, we focused on the voltage-controlled magnetic anisotropy effect. Using the gadolinium-iron alloy, we successfully controlled the magnetic characteristics of a 9-nm-thick film, which is several times as thick as the fundamental limit, by applying an electric field(4). We plan to develop a spin-SLM using these novel structures.

■Optical phased array

For a future integral 3D display with much higher performance than current displays, we are conducting R&D on a new beam-steering device that can control optical beams from each pixel at a high speed without using a lens array. Such a device would enable the reproduction of 3D images having both a wide viewing zone and high resolution. Focusing on an optical phased array (OPA) consisting of multiple optical waveguides (channels), we are currently designing, fabricating and evaluating an OPA element using an electro-optic (EO) polymer that can control the refractive index at a high speed by applying an external voltage.

An OPA using an EO polymer can flexibly change the shape and deflection direction of output light beams by applying a voltage via the channels and controlling the light phase by using the changes in the refractive index of the EO polymer. We previously designed and prototyped an OPA consisting of eight channels and demonstrated its basic operating principle.

In FY 2017, we enhanced the EO coefficients of an EO polymer element and dramatically improved the phase control performance by modifying the poling process, which is necessary for the fabrication of the element. The poling process is an orientation technology for aligning the molecular directions in an EO polymer by applying a high electric field to the highly heated element. As a result of the modification, our new prototype OPA with a beam output channel waveguide pitch of 10 μm successfully demonstrated a light beam deflection of ±3.2 degrees and linearity of the phase shift with the voltage(5)-(7) (Figure 2-6). This research was conducted in cooperation with the National Institute of Information and Communications Technology (NICT).
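For a phased array with channel pitch d and a phase increment \Delta\phi between adjacent channels, the steering angle satisfies the standard relation \sin\theta = \lambda\,\Delta\phi / (2\pi d). The operating wavelength is not stated here, so only the geometry can be checked: \sin(3.2^\circ) \approx 0.056, and with d = 10 \mu m this corresponds to \lambda\,\Delta\phi / (2\pi) \approx 0.56 \mu m, i.e., reaching ±3.2 degrees requires roughly 0.56 \mu m of optical path difference between adjacent channels, which the EO polymer phase shifters must supply.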

[References]
(1) K. Aoshima, H. Kinjo, N. Funabashi, S. Aso, D. Kato, K. Machida, K. Kuga and H. Kikuchi: "Current induced domain wall movement of magnetic wires with various compositions of Gdx-Fe1-x alloy," Proc. 41st Annual Conference on Magnetics in Japan, 21pA-13 (2017) (in Japanese)
(2) N. Funabashi, H. Kinjo, K. Aoshima, D. Kato, T. Usui, S. Aso, K. Kuga, T. Mishina, K. Machida, T. Ishibashi and H. Kikuchi: "Magneto-Optical Spatial Light Modulator Driven by Spin Transfer Switching for Electronic Holography," Proc. IDW'17, AMD8-2, pp.400-403 (2017)
(3) R. Ebisawa, K. Aoshima, D. Kato, N. Funabashi, K. Kuga, Y. Akiyama and K. Machida: "[TO BE CHECKED]," NHK STRL R&D, No.166, pp.28-38 (2017) (in Japanese)

Figure 2-5. (a) 2D spin-SLM prototype with a 2-μm pixel pitch and 1,000×1,000 pixels, (b) 2D image displayed

Figure 2-6. Basic structure of the OPA device (input light beam, beam splitter, phase shifters, electrode terminals and beam output channel waveguides) and output far-field beam patterns (maximum deflection angle 3.2°)


Figure 2-7. Display of the sword tip trajectory at the 70th All Japan Fencing Championships

Figure 2-8. Application example of the synthesis of ball CGs with real images in volleyball (a ball CG is superimposed on the actual ball position and its trajectory)

2.3 Multidimensional image representation technology

With a view to realizing new image representation that makes live sports coverage and other live programs more user-friendly and interesting, we are conducting R&D on multidimensional image representation technologies using video information, spatial information and sensor information in addition to conventional 2D image analysis.

■New image representation technique using real-space sensing

We made progress in researching an image representation technology using real-time sensing that expands the degree of freedom in image representation by utilizing object information obtained from camera images and sensors.

We are investigating a technology for obtaining the information (such as the position and direction) of a specific object from sports scenes to represent the movements of a moving object in sports in a more comprehensible way. In FY 2017, we developed a "sword tracer" system that visualizes the complicated and high-speed movements of the tip of a sword in fencing, in cooperation with the Broadcast Engineering Department. This system traces the reflected light from the sword tip on infrared images by using a camera that can capture visible images and near-infrared images on the same optical axis and overlays the trajectory CGs on the visible images. It is capable of the high-precision and real-time tracing of a sword tip by using machine learning. We used the sword tracer system for broadcasting for the first time at the All Japan Fencing Championships held in December 2017 (Figure 2-7).

For the purpose of commenting on game strategies, we researched a method to estimate the head pose (eight directions) of a player from a low-resolution face image in soccer game footage captured with a wide-angle lens(1). In FY 2017, we developed a system for managing and modifying data to increase the operability and demonstrated the effectiveness of the method through its use in sports commentary programs. The method helped visually and comprehensibly explain plays that take advantage of a dead angle by indicating a player's field of vision with a fan-shaped mark. This was well received by viewers. We exhibited this research result at the NHK STRL Open House 2017.

Sports virtual technology that synthesizes CGs generated according to the camera attitude (such as the position and direction) with sports scenes is becoming increasingly common in live sports coverage. We are developing a new camera attitude sensor that can be installed in various capture equipment such as crane cameras and wire cameras and easily obtain the camera parameters necessary for this sports virtual technology. In FY 2017, we prototyped a small and high-precision attitude sensor that effectively combines a micro-electro-mechanical-systems (MEMS) gyroscope, an optical fiber gyroscope, a laser range sensor and a constant-tension spring mechanism, and conducted measurement experiments. The results showed that the sensor can reduce the drift error to 1/65 that of a conventional method.
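How the MEMS gyroscope, optical fiber gyroscope and laser range sensor are fused is not detailed in the report; the generic complementary-filter sketch below only illustrates the basic idea of suppressing gyro drift by blending an integrated angular rate with a slower absolute reference.

```python
# Generic complementary filter: blend a fast but drifting gyro estimate with a slow
# absolute reference (e.g., an angle derived from a range measurement). Illustrative only.
def complementary_filter(angle, gyro_rate, reference_angle, dt, alpha=0.98):
    predicted = angle + gyro_rate * dt                           # integrate the rate (drifts over time)
    return alpha * predicted + (1.0 - alpha) * reference_angle   # slowly pull back toward the reference
```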

For studio video production, we are developing a studio robot that enables natural joint performance between live performers and CG characters. Our conventional technique had a problem that part of the robot could be seen behind the CG character superimposed on the robot position in the captured video. To address this problem, in FY 2017, we developed a basic technology to naturally hide the robot area in the captured video by using the result of a comparison between studio background information (e.g., 3D structure and images) obtained in advance and the video in which the robot is captured. We plan to improve the operability and conduct demonstration experiments in programs.

■Sports 4D Motion

To make sports scenes, especially live sports broadcasts of ball games, more comprehensible to viewers, we developed “Sports 4D Motion,” a new image representation technology that combines multiview video and CGs.

In FY 2017, we studied elemental technologies such as a camera calibration technology for multiview camera systems supporting zoom lenses as well as conventional panning and tilting control(2), an object-tracking technology that can be used even on a complicated background and a 3D information analysis technology.



By combining these technologies, we enabled a moving camera to obtain the position of an object such as a ball in real time and with high precision(3). This makes it possible to synthesize CGs with real images accurately while broadcast cameras zoom, without restricting production effects (Figure 2-8). We also increased the resolution of our multiview camera system by using 4K cameras instead of HD-resolution cameras and improved the quality of the generated multiview video. Moreover, we established a Sports 4D Motion system that synthesizes CGs of a ball trajectory and other information with multiview video of a focused object by integrating the elemental technologies that we developed. We used these technologies at the IBC 2017 exhibition and in programs such as "Sports Innovation" and "NHK ABU Robot Contest" and demonstrated their excellent real-time capability, synthesis accuracy and practicality. We plan to expand the variety of sports to which these technologies can be applied and to utilize the system for live sports coverage and commentary programs.
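As an illustration of the kind of 3D information analysis involved, the sketch below triangulates a ball's 3D position from its image positions in two calibrated cameras using the standard linear (DLT) method; the projection matrices would come from the calibration step above, and the real system additionally handles zooming, panning/tilting and object tracking.

```python
# Standard linear triangulation from two calibrated views; P1 and P2 are 3x4 projection
# matrices, x1 and x2 are the ball's pixel coordinates in each view. Illustrative only.
import numpy as np

def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # least-squares solution is the last right singular vector
    X = vt[-1]                           # homogeneous 3D point
    return X[:3] / X[3]
```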

■2.5-dimensional multimotion representation technique

We are researching multimotion representation in live sports broadcasting that shows a series of motions in the form of serial photographs by generating an athlete area image extracted from the captured image and placing it as a CG billboard in a background 3D CG image of the stadium. In FY 2017, we modified the object cutout technique (Active GrabCuts) and also developed an object region extraction technique that uses both visible images and far-infrared images. This technique enables the high-speed and accurate cutout of an object that is difficult to extract with image information alone. We also applied the object extraction technology and 3D position measurement technology, which are elemental technologies of the 2.5D multimotion representation technology, to the trajectory drawing of a flying object at the NHK Robot Contests held in June and August 2017. These technologies helped produce comprehensible image representation by visualizing the trajectories of multiple disks that robots throw concurrently(4).

■Naturally enhanced image representation of a flying object

The real-time CG representation of the trajectory of a flying object such as a fast-moving ball is effective for easy-to-understand commentary in live sports broadcasting. Assuming its use for golf events, which are difficult to measure with conventional technologies, we are developing a technology for measuring the position of a golf ball in real time by combining trajectory estimation equations, image processing technologies and laser sensors. In FY 2017, we implemented a function to accurately display the trajectory of a ball until it lands, in addition to the trajectory of a tee shot, by analyzing images captured by two auto-tracking sensor cameras installed on the landing side of the ball. We conducted demonstration experiments on the system at the final qualification rounds of the Japan Women's Open Golf Championship and confirmed its effectiveness(5). We exhibited this system at the NHK STRL Open House 2017 and also at gallery plazas for three golf championships aired live by NHK.
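The report does not give the trajectory estimation equations themselves; as a hedged illustration, the sketch below numerically integrates a simple projectile model with quadratic air drag, the kind of model that could be fitted to camera and laser measurements to draw a ball's flight until it lands (the constants are generic, not NHK's).

```python
# Simple projectile model with quadratic drag, integrated until the ball returns to the
# launch height. p0, v0 are 3D position (m) and velocity (m/s); k is a drag/mass constant.
import numpy as np

def simulate_trajectory(p0, v0, dt=0.01, g=9.81, k=0.02):
    p, v = np.asarray(p0, dtype=float), np.asarray(v0, dtype=float)
    points = [p.copy()]
    while p[2] >= p0[2]:
        a = np.array([0.0, 0.0, -g]) - k * np.linalg.norm(v) * v   # gravity + air drag
        v = v + a * dt
        p = p + v * dt
        points.append(p.copy())
    return np.array(points)
```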

[References]
(1) S. Yokozawa et al.: "Head Pose Estimation for Football Videos by Using Fixed Wide Field-of-View Camera," Proc. of International Conference Pattern Recognition Systems (ICPRS-17), p. 7 (2017)

(2) M. Kano et al.: "Accurate Calibration of Multiple Pan-Tilt Cameras for Live Broadcast," 5th International Conference on 3D Vision (3DV 2017), pp. 594-602 (2017)

(3) H. Okubo et al.: "Prototyping of Live Sports Graphics System Equipped with Object Tracking Function -Real-Time 3D Ball Tracking Utilizing Multiple PTZ Cameras-," ITE Technical Reports, Vol. 41, No. 26, ME2017-85, pp. 9-12 (2017) (in Japanese)

(4) H. Moroika, S. Yokozawa and H. Mitsumine: "Development of a system drawing trajectories of flying objects for live TV programs," Proc. of ITE Winter Annual Convention, 12C-2 (2017) (in Japanese)

(5) D. Kato and H. Mitsumine: "A study of 3D coordinate measurement and a track indication method of a flying object for broadcasting use," Proceedings of the 17th SICE System Integration Division Annual Conference, 3A1-04 (2017) (in Japanese)


3 Internet technology for future broadcast services

We continued our research on how to use the internet to provide "public media" adapted to diverse viewer environments.

In our research on a media-unifying platform, we investigated a system that automatically selects appropriate media according to the user situation in response to the diversification of the distribution media and viewing terminals for TV programs. We implemented the system on smartphones and verified its effectiveness through user evaluation experiments. We also devised a link description scheme to specify content regardless of the type of distribution media and designed and verified a system in which multiple servers and operators cooperatively return the distribution information of a program specified by the link.

In our work on video distribution technologies, we developed a streaming technology that allows smooth switching between different viewpoints of multi-viewpoint camera images over the internet and a server-side rendering technology that enables the TV viewing of 360-degree camera images. We also added live video support to the transmission rate control technology that ensures stable video distribution even if the number of viewing terminals increases.

In preparation for the start of SHV satellite broadcasting, we contributed to the establishment and revision of the ARIB standard on multimedia content exchange formats and character coding schemes.

In our work on service linkage technologies, we researched a content-matching technology to provide new TV experiences by connecting TV viewing to daily activities. To enable linkage services for Hybridcast TV, we studied a device linkage architecture, developed and exhibited service instances in cooperation with commercial broadcasters and manufacturers, and promoted standardization at the IPTV Forum Japan. We also investigated a technology to interact with Internet of Things (IoT)-enabled devices and developed an intermediary function to connect with voice assistant devices.

To promote Hybridcast both in Japan and overseas, we helped conduct a performance test event and studied ways of enabling the cross-execution of applications on both Hybridcast and Europe's HbbTV system.

In our work on program information utilization technology, we continued with our study on data structuring. We built "Rika (Science) Map," which presents video content systematized on the basis of government curriculum guidelines for science in elementary and junior high schools, and conducted demonstration experiments on the educational effects of viewing high school programs together with related video.

In our research on a TV-viewing robot that enjoys watching TV with the viewer, we fabricated prototype equipment using an experimental communication robot and developed a function to detect the directions of the TV and viewer and a function to generate utterances related to the program.

In our work on augmented reality (AR) technology, we studied a new viewing experience with "Virtual TV," which presents virtual 2D images in a real space, expecting the widespread use of glasses-shaped AR devices.

In our work on security technologies, we continued with our research on a cryptography scheme that enables service providers to view and search encrypted viewer information for personalized services on the cloud, using the service providers' attributes, without decrypting the information. We implemented the scheme on a PC and verified its operation. We also strengthened the security of a digital signature scheme and studied an update method for a scrambling scheme. Moreover, we conducted a survey on the cyber security measures of broadcasters.

3.1 Content provision platform

We are researching technologies for making use of the internet to provide more user-friendly and convenient broadcasting services. In FY 2017, we designed services assuming multiple media and service providers in our research on a media-unifying platform that enables the easy viewing of content at any time without considering the differences in delivery paths and viewing terminals. We also studied ways to achieve diverse viewing styles in IP content delivery and increase responsiveness. Moreover, we contributed to domestic and international standardization activities including the specification of a multimedia content exchange format toward the start of new 4K/8K satellite broadcasting in December 2018.

■Media-unifying platform


Figure 3-1. Example of a media-unifying platform implemented on actual equipment


As TV programs are delivered not only by broadcasting but by diversifying media on the internet, a growing number of people view programs on non-TV receiver terminals such as smartphones and tablets. Against the backdrop of this diversification of program-viewing methods, we are researching a "media-unifying platform" that automatically selects appropriate media for viewing programs in accordance with the user situation.

The media-unifying platform consists of servers that control the program distribution status of each medium and media-unifying engines that automatically select appropriate media on the terminal side. We implemented a media-unifying engine that we had prototyped and verified, together with a software component necessary for running the engine, on commercial smartphones on a trial basis in cooperation with mobile service providers. Using these smartphones, we exhibited how programs are selected and viewed through appropriate means according to the viewer environment, such as at home or outdoors, at the NHK STRL Open House 2017 (Figure 3-1).

We also demonstrated that the media-unifying platform enables interaction with a TV program recorder and a smooth transition of viewing from a smartphone to a TV according to the viewing environment at home.


Figure 3-2. Example of an interaction model with multiple servers (public, private and ad hoc servers provide terminals at home, in stores and at evacuation centers with distribution information such as broadcast, internet simulcast, VOD, online redistribution of broadcasts, video recording/remote viewing and store-linked ads)

Figure 3-3. Example of a program resolution model using multiple operator servers (the media-unifying engine on the viewing device passes link parameters to the public control server, whose gateway and content identification functions query the servers of operators A-E via their APIs and return a list of the distribution status)


To assess the reception quality of broadcasting and the internet in various living environments, we developed a portable device for measuring the reception status of broadcasting and communications and conducted measurements in actual living environments such as on a train, on a bus and on foot(1). The results, along with the delivery models, were incorporated into ITU-R BT.2400, a report on a global platform.

In FY 2017, we designed each functional element of the media-unifying platform and verified the functions. First, we proposed a link description scheme that can specify content regardless of the type of distribution media and a method for generating the links. We built an experimental environment where content is delivered by broadcasting, internet simulcast and VOD and verified the link generation and resolution models that we proposed(2).

Regarding the program distribution status control servers that provide program distribution information in response to a request from a receiver terminal, we prototyped and verified an experimental system using a model in which multiple servers with different purposes and targets operate cooperatively (Figure 3-2). This model provides appropriate viewing content according to the user's lifestyle and usage situation by the combined use of a public server that processes the program information common to all users, a private server that processes the program information that varies with the user and program recorder, and an ad hoc server that processes the information of programs that can be viewed only in limited areas such as a shopping mall or an evacuation site(3).

For the program distribution status control servers, we studied a system model considering a more practical service style and prototyped a system that refers to the program distribution information of the operator who distributes a certain program and generates and returns the distribution status (Figure 3-3). We demonstrated that this system can obtain the distribution information from multiple operator servers in response to a request from a receiver terminal by using a gateway function and a content identification function incorporated into the public control server(4).
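To make the flow suggested by Figure 3-3 concrete, here is a hedged sketch of how a public control server might resolve a link: identify the content, fan the query out to each operator's API through the gateway function, and return the aggregated distribution status. The endpoint URLs, the identify helper and the JSON field names are hypothetical, not taken from the report.

```python
# Hypothetical resolution flow for the model in Figure 3-3 (names and fields are made up).
import requests

OPERATOR_APIS = {
    "operatorA": "https://a.example.com/api/distribution",
    "operatorB": "https://b.example.com/api/distribution",
}

def resolve_program(link_params, identify):
    content_id = identify(link_params)                 # content identification function
    statuses = []
    for name, url in OPERATOR_APIS.items():            # gateway function queries each operator
        resp = requests.get(url, params={"contentId": content_id}, timeout=3)
        if resp.ok:
            statuses.extend(resp.json().get("distributionStatus", []))
    return {"contentId": content_id, "distributionStatus": statuses}
```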

To assess the effectiveness of the media-unifying platform, we asked general users to evaluate the platform in December 2017. First, we asked 1,000 users to respond to an online questionnaire and collected data on the genres of video they view, the equipment they use, their viewing media and how they share video information with others. On the basis of the results, we asked 50 participants to evaluate our prototype system. The results showed a huge reduction in the time required for selecting and viewing a program and strong user demand for a media-unifying platform.

■Multiview streaming technology

As a video delivery technology for new user experiences, we developed a multiview streaming technology that delivers images captured by multi-viewpoint robotic cameras over the internet and allows users to view them on their PC and smartphone browsers while switching the viewpoints smoothly. This technology delivers high-quality camera images from each viewpoint and a screen of thumbnails of the images from all viewpoints for viewpoint selection in MPEG-DASH (Dynamic Adaptive Streaming over HTTP) format and switches between the two types of images according to the user operation. This enables efficient distribution and smooth viewpoint switching.

We also prototyped an experimental system consisting of a distribution system for the live streaming of images captured by multiple cameras and a viewing application implemented with Web standard technologies such as JavaScript and CSS. The prototype system demonstrated smooth viewpoint switching in accordance with the user's touch operation on PC and smartphone web browsers(5). We exhibited a prototype system that uses images captured by nine multi-viewpoint robotic cameras at the NHK STRL Open House 2017 (Figure 3-4).

■Server-side rendering technology

We developed a server-side rendering technology that enables the viewing of 360-degree images, which require advanced image processing, on terminals with a limited CPU and memory such as TVs.

This technology enables interactive content viewing on a TV by sending the information of TV remote controller operations to a server on the cloud and transmitting the images rendered on the server in accordance with the operation information to the TV at low latency using web real-time communication (WebRTC) technology.

We fabricated an experimental system using this technology and measured the time it takes from a remote controller operation to the response confirmed on the TV. The results demonstrated that a response time of 0.1 second or less can be achieved by adjusting the resolution and other parameters of the rendered images(6). We exhibited a prototype system that uses a 360-degree camera at the NHK STRL Open House 2017 (Figure 3-5).

■Transmission rate control for video distribution

We developed a method for carefully controlling video transmission rates in accordance with the congestion status of distribution servers as a way of ensuring the stable quality of live video distribution even if the number of accessing terminals increases. In FY 2017, we adapted the transmission rate control technology that we previously developed to live video inputs. We built a prototype distribution system equipped with this method in a cloud server environment and verified the operation by feeding live video to the prototype system via the internet. The results showed that this method can suppress fluctuations in image quality and provide video with a stable quality compared with conventional methods, even when an increase in the number of viewing terminals causes congestion on distribution servers(7).
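The control law itself is not published; the following sketch only illustrates the general idea of lowering the transmission rate quickly when the distribution server is congested and probing upward gently when it has headroom (the thresholds and step sizes are placeholders).

```python
# Illustrative congestion-aware bitrate controller; not NHK's actual algorithm.
def adjust_bitrate(current_kbps, congestion, min_kbps=500, max_kbps=8000,
                   high=0.8, low=0.5):
    """congestion: 0.0 (idle) .. 1.0 (saturated), e.g., derived from server load."""
    if congestion > high:
        current_kbps *= 0.7        # back off quickly under heavy load
    elif congestion < low:
        current_kbps += 200        # probe upward gently when there is headroom
    return max(min_kbps, min(max_kbps, current_kbps))
```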

Figure 3-4. Viewing screens of multiview streaming (viewpoint switching by swiping the screen)

Figure 3-5. TV viewing of 360-degree images by server-side rendering technology



■Promotion of video distribution for TV

We continued with our work to promote video distribution for TV by using Hybridcast and other means(8). We previously developed a video player under the name of "dashNX" and made it available to the members of the MPEG-DASH interoperability study group at the IPTV Forum.

In FY 2017, this player was utilized in 4K delivery experiments involving many broadcasters as part of demonstration projects by the Ministry of Internal Affairs and Communications and in large-scale commercial services, establishing a good track record of practical use.

To ensure that functions compliant with the IPTV Forum Japan's VOD operation guidelines are implemented in TV receivers and that their operation is standardized, we developed an operation verification environment for determining conformity with the guidelines and made it available to the Forum.

■Standardization activities for SHV multimedia broadcasting and hybrid systems

The regular service of new 4K/8K satellite broadcasting is scheduled to start on December 1, 2018. We contributed to the establishment and revision of the ARIB standard on ideographic variant selectors for multimedia content exchange formats and character coding schemes. We also contributed to the incorporation of cases of use of second screens and the additional description of device linkage operation in a revision of ITU-R Report BT.2267, which describes hybrid systems.

At the technical committee of the Asia-Pacific Broadcasting Union (ABU), we promoted the establishment of a requirements recommendation for closed captions in digital TV and reported on domestic and international trends in multimedia schemes, CAS technology and security in broadcasting. Through these activities, we tried to increase recognition of Japan's broadcasting systems and Hybridcast.

[References]
(1) H. Endo, S. Taguchi, K. Matsumura, K. Fujisawa and K. Kai: "Broadcast and Broadband Reception Quality Field Experiment to Validate the Effectiveness of Media-Unifying Platform," IEEE BMSB, 8C-1 (2017)

(2) S. Taguchi, H. Endo, S. Takeuchi, K. Fujisawa and K. Kai: "Generating Method of Content Link on Media-Unifying Platform," ITE Annual Convention, 11C-4 (2017) (in Japanese)

(3) H. Endo, S. Taguchi, S. Takeuchi, K. Fujisawa and K. Kai: "Distribution Status Management System for Media-Unifying Platform," ITE Winter Annual Convention, 15C-2 (2017) (in Japanese)

(4) S. Taguchi, H. Endo, S. Takeuchi, K. Fujisawa and K. Kai: "Content Providers Coordination Model for Media-Unifying Platform," ITE Winter Annual Convention, 15C-4 (2017) (in Japanese)

(5) S. Sekiguchi, K. Ikeya, M. Kurozumi and S. Nishimura: "A Development of Multi-View Streaming System Using MPEG-DASH," ITE Annual Convention, 22D-1 (2017) (in Japanese)

(6) M. Onishi, K. Fujisawa and K. Matsumura: "Prototyping of WebRTC-based Server Side Rendering System on Hybridcast," IEICE Society Conference, B-6-8 (2017) (in Japanese)

(7) M. Kurozumi, S. Tanaka, S. Nishimura and M. Yamamoto: "Verification of Live Video Delivery System by Bit Rate Control According to Congestion Situation of Delivery Server," ITE Annual Convention, 12C-3 (2017) (in Japanese)

(8) S. Nishimura: "Technologies supporting video streaming for TV terminals MPEG-DASH & MSE/EME," ITE Journal, Vol. 71, No. 6, pp. 792-796 (2017) (in Japanese)

Figure 3-6. Example of service linking broadcasting with IoT

Figure 3-7. Example of the use of viewing information in linkage between broadcasting and automobile


3.2 Service linkage technologies

We are researching service linkage technologies that will offer new TV experiences in various scenes of daily life by taking advantage of the latest IT technologies such as the internet, IoT-enabled devices and augmented reality (AR) and by linking TV with various services in society.

■System for linking TV viewing with daily activities

With the aim of realizing more convenient broadcast-broadband services that connect TV viewing to internet services and also to daily activities in the real world, we are conducting R&D on a content-matching technology, which is a key technology for linking broadcasting with various applications and IoT-enabled devices.

In FY 2017, we proposed extending the Hybridcast device linkage architecture so that viewers can access broadcast-broadband services safely and securely by an easy operation such as via smartphone applications. We also promoted the standardization of a device linkage protocol that enables Hybridcast applications to be started from smartphones and other terminals at the IPTV Forum Japan. At the NHK STRL Open House 2017, we exhibited about ten service cases that use prototype systems supporting the device linkage architecture which we designed and fabricated in FY 2016, demonstrating the effectiveness of this architecture(1). We also actively promoted this architecture domestically and internationally through exhibitions such as IBC, CEATEC and InterBEE. Moreover, we conducted an evaluation experiment with about 1,000 participants to verify the effectiveness of the TV viewing function started from smartphones. The results showed that there is clear demand from viewers for such a function(2)(3).

On the basis of this architecture, we are also researching the utilization of IoT-enabled devices, which have been widely used in recent years (Figure 3-6). In FY 2017, we investigated a system model that provides services by easily connecting broadcasters' data and content with IoT-enabled devices(4)(5). We also developed an intermediary function to connect the functions of voice assistant devices with the device linkage architecture and prototyped an application that enables linkage between broadcasting services and various other services regardless of the type of voice assistant device(6).

We are researching how TV program viewing can produce new discoveries and values not only in the living room but also in various scenes of daily life by using personal data such as viewing information and behavior information. In FY 2017, we prototyped and exhibited services that connect broadcasting and driving behaviors by utilizing a user's program-viewing logs(7) (Figure 3-7). Such services include presenting the user with information on tourist spots related to watched programs on the head-up display (HUD) while driving and helping the driver stay awake by giving a quiz on the topics of the programs viewed on the previous day. We also studied a system that presents program-related information in accordance with user behaviors through personal data utilization under the user's initiative(8). In addition, we contributed to the preparation of draft technical specifications for "Guidelines for describing and using viewing data," which the IPTV Forum Japan began formulating in order to allow broadcasters and service providers to share and use program viewing data.

■Promotion of Hybridcast

We are also working to resolve technical issues of Hybridcast to promote the widespread use of its services. As with the previous year, we conducted a performance test event with the IPTV Forum Japan to evaluate the performance of Hybridcast browsers that run on TV receivers. This event served as a good opportunity for broadcasters and receiver manufacturers to share information about the current situation and issues of receiver performance to improve services and receiver performance.

To promote Hybridcast services globally, we continued with our study on ways of enabling the cross-execution of applications on both Hybridcast and HbbTV, which was developed and is mainly deployed in Europe. We previously developed equipment that generates equivalent applications for both. In FY 2017, we studied extending its functionality to include support for video distribution in the European format. We also conducted a survey in Europe on the current situation of HbbTV services and exchanged opinions with our European counterparts about future interoperability.

■Program information utilization technology

We are researching ways to help users understand the content of programs by using a data-structuring technology for program-related information that enables easy interaction between broadcast programs and various internet services. In FY 2017, we developed a mechanism for systematizing and relating video content based on the structured data generated from the government curriculum guidelines for science in elementary and junior high schools. This technology was employed in the NHK Online web pages for "Rika (Science) Map," which was launched in April 2016. We also conducted demonstration experiments with the Program Production Department at three correspondence-course high schools to verify the effectiveness of providing related content of different genres together to promote the viewer's understanding of the content. The experiments evaluated the educational effects of viewing NHK High School Programs along with news and general programs on students in terms of the promotion of their understanding and the increase in their motivation to learn.

■TV-watching robot

We are researching a TV-watching robot that serves as a partner to enjoy watching TV at home. In FY 2017, we implemented a function to automatically detect the directions of the TV and viewer and a function to generate utterances automatically from the subtitles of TV programs in our experimental communication robot. We fabricated and exhibited prototype equipment(9) at the NHK STRL Open House 2017 (Figure 3-8).

In preparation for an experiment on the effect of having the robot around during TV viewing, we developed an experimental system for remote-controlling the robot's utterances and gestures and built a TV-viewing experimental room simulating a living room. We evaluated the performance of the function to detect the directions of the TV and a nearby viewer from images captured by the camera mounted on the robot in the living room. The results identified problems such as the degraded accuracy of detecting the person's position when images of the person's profile or the back of the head are used, and the need to improve the detection accuracy and robot control so as to align the robot with the person's line of sight(10).

We continued with our development of an utterance generation technology to realize a function that enables the robot to speak spontaneously in relation to the program being viewed. In FY 2017, we developed an algorithm for generating an utterance text even when the robot does not know a keyword contained in the subtitles by predicting and using a close keyword based on the vocabulary that the robot has already acquired.

A communication robot connected to the internet is an example of an IoT-enabled device and may handle personal and confidential data. This requires information security measures for such robots. Furthermore, an autonomous robot with artificial intelligence (AI)-based learning capability may have an impact on people's ethics. We therefore began examining development guidelines, considering measures to ensure information security and prevent an adverse influence on people. In FY 2017, we investigated the domestic and international trends in guidelines for robots and AI and organized discussion points by taking our TV-watching robot under R&D as an example(11).

■Viewing experience using AR terminal

"AR glasses" are a glasses-shaped AR display using AR technology that can place virtual objects in a real 3D space and are expected to be widely used in the near future. We began researching the use of AR glasses for TV viewing.

Figure 3-8. TV-watching robot


We investigated a new viewing experience using "Virtual TV," which is 2D video displayed on a virtual TV placed in the real space that the user wearing AR glasses is viewing. Virtual TV provides a higher degree of freedom in displaying images without being restricted by the physical hardware constraints of a conventional TV placed in a real space. Focusing on this feature of Virtual TV, we organized the elements of technical research on the viewing experience into three perspectives: reality, visibility and interactivity(12). This research was conducted in cooperation with the Four Eyes Lab at the University of California, Santa Barbara.

[References]
(1) H. Ohmata, M. Ikeo, H. Ogawa, C. Yamamura, M. Miyazaki, M. Uehara and H. Fujisawa: "Hybridcast Connect X: Application Framework for Collaboration between Life Activity and TV Experience," IPSJ DICOMO2017, pp. 360-369 (2017) (in Japanese)

(2) H. Ohmata, M. Ikeo, H. Fujisawa and M. Uehara: "A Survey of Use of Online Video Services on TV and Smartphone and Needs Analysis of Cast Functionality," Proc. of the ITE Annual Convention, 12C-2 (2017) (in Japanese)

(3) T. Takiguchi, M. Ikeo, H. Ohmata, H. Fujisawa and A. Fujii: "Research on User Behaviors of Linear TV Viewing in Cross-Device Environments," Proc. 80th National Convention of IPSJ (2018) (in Japanese)

(4) H. Ogawa, M. Ikeo, H. Ohmata, C. Yamamura and H. Fujisawa: "System Architecture for IoT Services with Broadcast Content," IEEE ICCE, CT05-3 (2018)

(5) H. Ogawa, H. Ohmata, C. Yamamura, A. Fujii and H. Fujisawa: "IoT Device Usages to Enhance the Chance for Utilizing Broadcaster's Data and Content," Proc. 80th National Convention of IPSJ (2018) (in Japanese)

(6) K. Hiramatsu, H. Ogawa, M. Ikeo and H. Fujisawa: "A Study on TV Control using Voice Assistant Functions," Proc. of the ITE Winter Convention, 15C-5 (2017) (in Japanese)

(7) H. Ogawa, C. Yamamura, E. Sawada, M. Kondo and H. Fujisawa: "Program related data utilization for automobile service," Proc. of the ITE Annual Convention, 12C-1 (2017) (in Japanese)

(8) D. Sekine, H. Fujisawa and A. Fujii: "System model for utilization of program related data by user centric management function for personal data," Proc. 80th National Convention of IPSJ (2018) (in Japanese)

(9) Y. Kaneko, Y. Hoshi and M. Uehara: "Functions of Robot to Watch TV with People, and the Prototype," RSJ2017AC1I2-04 (2017) (in Japanese)

(10) Y. Hoshi, Y. Kaneko and M. Uehara: "Direction Detection Method of TV and Viewer using Camera and Microphone Array Mounted on TV-Watching Robot," Proc. of the ITE Annual Convention, 34B-4 (2017) (in Japanese)

(11) Y. Murasaki, Y. Kaneko, Y. Hoshi and M. Uehara: "A Study on Formulating Development Guidelines of TV-Watching Robot," IPSJ SIG Technical Reports, Vol. 2017-EIP-78, No. 14 (2017) (in Japanese)

(12) H. Kawakita, H. Fujisawa and T. Hollerer: "Research Challenge of 2D Moving Picture on AR Glasses as a New TV," Proc. of the ITE Winter Convention, 21C-2 (2017) (in Japanese)

3.3 Security technologies

To provide more advanced convergence services of broadcasting and telecommunications, we researched cryptography and security technologies that can be used to provide secure and reliable services.

■Cryptography algorithm for database access control

Service providers need information about viewers in order to personalize services. Also, to allow viewers to receive unknown and unexpected services, it is preferable to provide many service providers with this information. An efficient way to meet this requirement is to store viewer information in a cloud server and allow various providers to access the information. From the perspective of preserving viewers' privacy, however, it is necessary to restrict access to the information. To meet these conflicting conditions, we researched an attribute-based encryption scheme that can specify the access rights of providers to information according to their attributes. We previously developed an encryption scheme that can search for necessary data from encrypted data without decryption. In FY 2017, we implemented this encryption scheme on a PC and confirmed that it operates at a practical speed(1)(2). We also developed an encryption scheme with a light processing load that can use wildcards to decrypt part of the attributes with an arbitrary attribute value(3).

■Cryptography algorithm utilization system

In a service taking advantage of the linkage between cars, mobile devices and TV, the user can receive personalized services in accordance with their location by giving service providers their preference and location information. For such services, we proposed a cryptography system that allows the user to receive recommended information while keeping their preference and location information concealed(4).

We also developed a key management scheme for cryptography suited to an access control system that allows the viewer to view paid broadcasting services even from outside the home by applying attribute-based encryption to paid broadcasting services(5)(6).

■Signature technology with robust security

We researched ways to enhance the security of digital signatures. We proved that the signature scheme that we developed in FY 2016 has strong security based on a difficult problem called the computational Diffie-Hellman assumption(7)(8).

■Update method for scrambling scheme

We evaluated the security of our previously developed update method for a scrambling scheme against attacks that try to reconstruct partial images and demonstrated that it has practical security(9).

■Survey on cyber security of broadcasting systems

We conducted a survey on the cyber security situation of broadcasting systems and implemented necessary security measures in cooperation with relevant departments. We also surveyed cyber security trends among domestic and international broadcasters.

[References]
(1) G. Ohtake, R. Safavi-Naini and L. F. Zhang: "Outsourcing of Verifiable Attribute-Based Keyword Search," NordSec2017, pp. 18-35 (2017)

(2) G. Ohtake, R. Safavi-Naini and L. F. Zhang: "Implementation evaluation of outsourcing scheme of verifiable attribute-based keyword search," Symposium on Cryptography and Information Security (SCIS) 2018, 4A2-4 (2018) (in Japanese)

(3) G. Ohtake, K. Ogawa, G. Hanaoka, S. Yamada, K. Kasamatsu, T. Yamakawa and H. Imai: "Partially Wildcarded Ciphertext-Policy Attribute-Based Encryption and Its Performance Evaluation," IEICE Trans. Fundamentals, Vol. E100-A, No. 9, pp. 1846-1856 (2017)

(4) K. Kajita, G. Ohtake and K. Ogawa: "Application of Attribute-Based Keyword Search to Integrated Broadcast-Broadband System with Mobile Communication," Symposium on Cryptography and Information Security (SCIS) 2018, 4A2-1 (2018) (in Japanese)

(5) K. Ogawa, S. Tamura and G. Hanaoka: "Key Management for Versatile Pay-TV Services," STM2017, pp. 3-18 (2017)


(6) K. Ogawa, S. Tamura and G. Hanaoka: "Versatile TV Services: Model and Requirements," ITE Winter Annual Convention 2017, 15C-6 (2017) (in Japanese)

(7) K. Kajita, K. Ogawa and E. Fujisaki: “A Constant-Size Signature Scheme with Tighter Reduction from CDH Assumption,” ISC2017, pp. 137-154 (2017)

(8) K. Kajita, K. Ogawa and E. Fujisaki: “A Constant-Size Signature Scheme with a Tighter Reduction from the CDH Assumption,” IACR ePrint https://eprint.iacr.org/2017/1116.pdf (2017)

(9) K. Ogawa and T. Inoue: “Practical Update of Scrambling Scheme and its Security Analysis,” ITE Trans. Media Technology and Application (MTA), Vol. 6, No. 1, pp. 110-123 (2018)


4 Technologies for advanced content production

As ways of producing high-quality attractive content services, we are progressing with our research on program production assistance technologies using artificial intelligence (AI) and big data, wireless technologies for program contributions such as live sports coverage and music programs, and audio technologies to pick up sports sound clearly.

In our work on content element extraction technology for assisting with program production, we improved the accuracy of our video retrieval technologies, which include face detection and recognition technologies, a technology for detecting character strings in video frame images and a similar-image retrieval technology that searches for images containing the same object as a query image. We also began studying a technology to improve the accuracy of extracting newsworthy tweets by applying image recognition in our social media analysis system.

In our work on video summarization technology, we built a demonstration system for automatic video summarization. We also developed a system that automatically generates summarized video from posted video footage and used the system for a broadcast program. In addition, we began researching a technology to automatically convert 4K monochrome film video to color video and developed a conversion method using deep neural networks.

We continued with our research on content production support technology. In our work on social media analysis technology, we developed a system that extracts useful information for news production from tweets and classifies it into 24 categories such as fires and traffic accidents. In our work on manuscript production assistance technology, we developed a system to automatically produce draft manuscripts of news reports on rivers in cases of heavy rains and typhoons by using past broadcast news manuscripts and river sensor information.

For the practical use of program production support technology using AI and big data, we promoted our Smart Production Lab activities. We introduced our technologies in a comprehensible way at the NHK STRL Open House 2017 and conducted various exhibitions and many demonstrations for visitors from relevant departments of the NHK Broadcast Center and external organizations to promote efforts to advance the external application of our technologies and further strengthen cooperation with relevant departments.

In our research on wireless cameras, we continued to develop a transmitter and receiver for Super Hi-Vision toward their practical use in 2020. We prototyped equipment using an orthogonal frequency division multiplexing (OFDM) scheme and a single-carrier frequency domain equalization (SC-FDE) scheme as transmission schemes for 42-GHz-band wireless cameras that can use a 125-MHz-wide band and conducted evaluation experiments on their actual operation. We also researched a system to transmit intermediate-frequency (IF) signals of 42-GHz-band wireless cameras over Ethernet to transmission and reception base stations and enabled it to support higher-order modulation signals such as 1024 QAM by developing a high-accuracy synchronization method for clock frequencies.

In our work on sound technologies, we researched an ultra-directional microphone that has a very sharp directivity to capture sports sound clearly even in loud cheers of spectators. We developed a technology for precisely simulating the performance of a shotgun microphone and proposed an effective signal processing method for side-lobe suppression.

4.1 Content elements extraction technology

■Video retrieval technology

Raw video footage stored in video archives and broadcast stations is a valuable resource for program producers. To make active use of such footage, we are researching video retrieval technology.

We worked to improve the accuracy of our face detection technology for specifying face positions in each frame image of video footage. By increasing training data and refining the classifier using cascaded decision trees, we achieved a high detection accuracy of about 90%(1). We also developed a system that identifies cast members in a TV program in real time by using this technology and a face recognition technology to identify persons and exhibited it at the NHK STRL Open House 2017 (Figure 4-1).
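NHK's detector is its own classifier trained on broadcast footage and is not publicly available; as a stand-in that shows only the per-frame cascade-detection step of such a pipeline, the sketch below uses OpenCV's bundled Haar cascade.

```python
# Cascade-based face detection with OpenCV's bundled model (a stand-in, not NHK's detector).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    """Return (x, y, w, h) boxes for faces found in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(40, 40))
```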

Regarding a technology to detect character strings in video frame images (scene text), we studied a method to improve the accuracy by using "text attributes" such as "signboard," "traffic sign" and "name tag" that are estimated by an object recognition technology.

We are also researching "fine" similar-image retrieval that searches for images containing the same object as a query image. The use of a new feature vector focusing on the symmetry of object appearances and a new similarity calculation method increased the accuracy of similar-image retrieval for certain genres such as "buildings"(2).
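The symmetry-aware feature vector and the similarity measure are not described in detail; the sketch below shows only the generic retrieval step of ranking database images by cosine similarity between feature vectors.

```python
# Rank database images by cosine similarity to a query feature vector. Illustrative only.
import numpy as np

def retrieve_similar(query_vec, db_vecs, top_k=5):
    """db_vecs: (N, D) array of image feature vectors; returns indices of the top_k matches."""
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    scores = db @ q
    return np.argsort(scores)[::-1][:top_k]
```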

We are also working toward the practical application of video retrieval technology from various angles. We developed a video material management system that is equipped with a search capability using tags automatically assigned by the object recognition technology and a similar-image retrieval capability in cooperation with program production sites which handle CG synthesis and video effects. We began the experimental use of the system at production sites. We also implemented the face recognition and scene text recognition technologies to develop a practical system that automatically assigns metadata to archive footage, which was proposed by program production sites.



As part of our work on Smart Production Lab (see 4.3), we began developing a mechanism to improve the accuracy of extracting tweets related to accidents and disasters by automatically detecting images that include fires, fire engines and the like, for a social media analysis system that we are developing to support news program production.

We participated in standardization activities at the Framework for Interoperable Media Services (FIMS) of AMWA-EBU, which specifies service interfaces for developing future flexible and highly extensible media production environments. We built a demo system equipped with a technique to divide video into shot units, an object recognition technology and a character string detection technology as FIMS-compliant services and exhibited the system at IBC 2017, an international broadcasting convention held in Europe.

■Video summarization technology

With the aim of supporting the production of previews and digest videos of programs, we are researching a technology to automatically summarize program video.

We increased the number of programs targeted by the video summarization method that we developed in FY 2016 and developed a demonstration system on the intranet(3). This system allows program producers to automatically create various patterns of summarized video by flexibly setting the weight allocation of the various information that can be used as clues in summarization, such as viewer responses from Twitter analysis and factors based on image analysis such as cast members, open captions (telops) and camera work.
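As a minimal sketch of the weight-allocation idea (the cue names, weights and data layout are illustrative, not the system's actual design), each shot can be scored by a weighted sum of normalized cues and the highest-scoring shots kept in chronological order:

```python
# Score shots by a weighted sum of per-shot cues and keep the best ones in time order.
def summarize(shots, weights, target_count=10):
    """shots: list of dicts like {"start_time": 12.0, "cues": {"twitter": 0.7, ...}}."""
    def score(shot):
        return sum(weights.get(cue, 0.0) * value for cue, value in shot["cues"].items())
    selected = sorted(shots, key=score, reverse=True)[:target_count]
    return sorted(selected, key=lambda s: s["start_time"])

# Example weighting that favors audience reaction over camera work:
# summarize(shots, {"twitter": 0.5, "cast": 0.3, "telop": 0.1, "camera_work": 0.1})
```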

Using this technology, we developed a system that automatically generates summarized video from a huge amount of video footage posted by viewers and utilized it for producing a TV program, "Tokyo Miracle City 1,000 days before the 2020 Olympics Filmed by Everyone," which was aired in October 2017. The system selected video automatically from about 1,400 posts of video footage captured by smartphones by using a face detection technology and a technology to recognize buildings and crowds, and arranged the selected video in a balanced manner to generate summarized video. The summarized video automatically generated by the system was introduced as "video edited by artificial intelligence (AI)" in the program.

In our research on the significance evaluation of video sections, we upgraded our technique for detecting subtle changes in facial expressions by analyzing face deformation data collected from sensors. We improved the detection accuracy by analyzing data of only the feature points that have a strong correlation with facial expression changes, such as the eyes and mouth. Part of this research was conducted in cooperation with Waseda University.

■Automatic colorization technology for monochrome video

Monochrome video stored in broadcast stations is a valuable video resource that can be used for various programs. In response to the needs of program producers, we began research on the automatic conversion of 4K-resolution-equivalent monochrome film video to color video.

We trained deep neural networks (DNNs) using a massive amount of program video collected from archive video and developed a technique for automatically converting monochrome film video to color video(4). This technique consists of three types of DNNs: one for color estimation, one for color correction and one for the propagation of color information to adjacent frames. The technique also allows program producers to correct colors by simple methods to specify the colors and generates color videos reflecting the corrected colors.

[References]
(1) Y. Kawai, T. Mochizuki and M. Sano: "Face detection and recognition for TV programs," IEICE Technical Report, IE2017-83 (2017) (in Japanese)

(2) N. Fujimori, T. Mochizuki and M. Sano: "Fine-Grained Similar Image Retrieval Focused on Symmetry of Objects," Proc. of Forum on Information Technology (FIT), H-041 (2017) (in Japanese)

(3) A. Matsui, T. Mochizuki, Y. Kawai and R. Endo: "Broadcast Video Summarization using Multimodal Contents Analysis," IEICE Technical Report, PRMU2017-26 (2017) (in Japanese)

(4) R. Endo, Y. Kawai and T. Mochizuki: "Study of Multi-Scale Residual Network for Image-to-Image Translation," ITE Technical Report, ME (2017) (in Japanese)

4.2 Content production support technology

With the aim of supporting broadcasters with program production, we are researching a technology to collect information useful for program production from social media and a technology to produce draft manuscripts based on a news database.

■Social media analysis

We devised an extraction method using neural networks that collects useful information for news production from tweets on Twitter and classifies it into 24 categories such as fires and traffic accidents(1). We also developed a social media analysis system using the devised method in cooperation with news program producers and started its trial operation (Figure 4-2)(2).
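The production system uses neural networks trained on tweets labeled by the news production side; as a much simpler stand-in that shows only the supervised classification step, the sketch below trains a bag-of-words classifier on labeled tweets (the category names are illustrative).

```python
# Simplified stand-in for the tweet classifier (the real system is a neural network
# trained on 24 news categories; here a TF-IDF + logistic regression pipeline is used).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_classifier(tweets, labels):
    """tweets: list of tweet texts; labels: category names assigned by news producers."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                        LogisticRegression(max_iter=1000))
    clf.fit(tweets, labels)
    return clf

# clf = train_classifier(training_tweets, training_labels)
# clf.predict(["Large fire reported near the station"])
```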

To organize the information collected by the social media analysis system, we developed a method for identifying incidents and accidents in virtual worlds such as movies and games(3) and a method for determining posts quoting reports of other news media(4).

■Manuscript production assistance

We developed a system to assist with the production of draft manuscripts for initial news reports on river conditions in cases of heavy rains and typhoons by making use of the huge number of news manuscripts stored in broadcast stations and sensor information (Figure 4-3)(5). We used this system on a trial basis for news production during the rainy season.
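A hedged sketch of the template-filling step, using the kinds of fields shown in Figure 4-3 (river name, observation spot, water level, warning threshold, observation time); the field names and wording are illustrative, not the system's actual template.

```python
# Fill a draft-manuscript template from river sensor readings; returns None below threshold.
TEMPLATE = ("The {river} running through {prefecture} has exceeded the flood danger "
            "level of {threshold} in {spot}, {city}, as of {time}.")

def draft_manuscript(obs):
    """obs: dict with river sensor readings and location metadata (illustrative keys)."""
    if obs["water_level_m"] < obs["threshold_m"]:
        return None                                   # below the warning level: no draft
    return TEMPLATE.format(river=obs["river"], prefecture=obs["prefecture"],
                           threshold="{:.2f} m".format(obs["threshold_m"]),
                           spot=obs["spot"], city=obs["city"], time=obs["time"])
```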

[References]
(1) T. Miyazaki, S. Toriumi, Y. Takei, I. Yamada and J. Goto: "Classifying Training Data for Extracting Important Tweets," IFAT, Vol. 2017-IFAT-127, No. 1 (2017) (in Japanese)

(2) J. Goto, T. Miyazaki, Y. Takei and K. Makino: "Automatic Tweet Detection based on Data Specified through News Production," Proc. of IUI2018 (2018)

(3) Y. Takei, T. Miyazaki, I. Yamada and J. Goto: "Tweet Extraction for News Production Considering Unreality," Proc. of PACLIC 31 (2017)

(4) K. Makino, Y. Takei, T. Miyazaki and J. Goto: "Tweet Classification for Reported Events Using Neural Networks," NLP2018, pp. 1143-1146 (2018) (in Japanese)

(5) J. Goto, K. Makino, Y. Takei, T. Miyazaki and H. Sumiyoshi: "Automatic Manuscript Generation for News Production," ITE Winter Convention, 13B-7 (2017) (in Japanese)

Figure 4-1. Real-time face recognition technology



4.3 Smart Production Lab

Artificial intelligence (AI) has the potential to dramatically change the style of program and news production by enabling new technologies such as image analysis, speech recognition and text big-data analysis. In FY 2016, we aggregated the outcomes of our more than 30 years of research on AI technologies and established a cross-functional research team, the Smart Production Lab, in NHK STRL as a base for promoting working-style reform through efficient program production and for accelerating interdepartmental cooperation and practical application for user-friendly broadcasting. The laboratory has been engaged in its activities with a structure that can respond to various needs of program producers by providing a permanent exhibition booth in STRL and an internal dedicated website that allows users to experience our research results, as well as by centralizing our consultation service.

In FY 2017, we conducted demonstrations for relevant departments at the STRL exhibition booth and studios at the NHK Broadcasting Center (Figure 4-4) to promote cooperation with departments interested in using AI technologies for broadcasting. This led to the practical utilization of various technologies, including the automatic video summarization technology used for live broadcast programs, the automatic script production system for river information, which local broadcast stations began using on a trial basis, and the audio description technology used for automated commentary at sports events. Production technologies using AI are gaining considerable attention as means of not only changing program production methods but also contributing to the working-style reform of broadcast program producers.

We also actively released information about each technology to external parties. We welcomed many visitors from external organizations to our exhibition booth at STRL and conducted many research presentations and technical exhibitions at domestic and international events such as the NHK STRL Open House 2017, the NAB Show, IBC 2017 and Inter Bee.

Figure 4-2. Social media analysis system (a 10% sample of tweets, approximately 8 million tweets per day, is analyzed in real time; the system learns from useful tweets identified by the production site and presents producers with the extracted tweets)

Figure 4-3. Manuscript production assistance system (using information from the Foundation of River & Basin Integrated Communications and a news database containing a huge number of past manuscripts, the system analyzes the situation, creates a template with the name of the river under observation, the name, address, latitude and longitude of the observation spot, the observation time, the current water level and the warning threshold, and produces a draft manuscript such as: "The Kumano River running through Wakayama Prefecture has exceeded the flood danger level of 14 m 95 cm in Hitari, Shingu City, as of 23:50.")

Figure 4-4. Demonstration for relevant departments at the permanent exhibition booth in STRL


4.4 Wireless cameras

In our research on wireless cameras for Super Hi-Vision, we continued to develop a wireless transmitter and receiver with the goal of putting them into practical use in 2020.

We studied a single-carrier frequency-domain equalization (SC-FDE) scheme that uses 42-GHz-band radio waves, which can use a wide channel bandwidth of 125 MHz. The SC-FDE scheme is expected to allow a higher average output power from the power amplifier than a conventional orthogonal frequency-division multiplexing (OFDM) scheme because its peak power is lower than that of the OFDM scheme. In FY 2017, we prototyped a transmitter and receiver with 200-Mbps-class transmission capacity that support diversity reception (single-input multiple-output: SIMO) for actual operation. Experiments simulating a studio environment demonstrated that the developed SC-FDE scheme can achieve a longer maximum transmission distance than the OFDM scheme (Figure 4-5)(1). We also studied a method for improving the performance of the SC-FDE scheme(2), 2×2 multiple-input multiple-output (MIMO) transmission using the SC-FDE scheme(3) for 400-Mbps-class transmission capacity, and 4×4 MIMO transmission using the OFDM scheme.
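The core receiver step of SC-FDE is a one-tap equalization in the frequency domain. The following minimal numpy sketch illustrates that step under simplifying assumptions (a cyclic-prefix channel modeled as circular convolution, a known channel estimate, and toy parameters that are not those of the 42-GHz prototype):

```python
import numpy as np

def sc_fde_equalize(rx_block, channel_est, noise_var):
    """One-tap MMSE equalization of a single-carrier block in the frequency domain.
    rx_block    : received time-domain block (cyclic prefix already removed)
    channel_est : estimated channel frequency response, same length as the block
    noise_var   : estimated noise variance (linear scale)"""
    R = np.fft.fft(rx_block)                                            # to frequency domain
    W = np.conj(channel_est) / (np.abs(channel_est) ** 2 + noise_var)   # MMSE taps
    return np.fft.ifft(W * R)                                           # back to single-carrier symbols

# Illustrative use with random QPSK-like data over a toy multipath channel
N = 256
h = np.fft.fft(np.array([1.0, 0.4, 0.2j]), N)
tx = np.sign(np.random.randn(N)) + 1j * np.sign(np.random.randn(N))
rx = np.fft.ifft(np.fft.fft(tx) * h) + 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))
eq = sc_fde_equalize(rx, h, noise_var=0.005)
```

Unlike OFDM, the data symbols themselves stay in the time domain, which is what keeps the transmitted signal's peak power low.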

We developed an Ethernet intermediate-frequency (IF) transmission system to transmit IF signals over Ethernet to the transmission and reception base stations for wireless cameras(4). Using a high-accuracy clock synchronization method based on the departure and arrival time stamps of each packet carrying IF signals, the system achieved a performance equivalent to a CNR (carrier-to-noise power ratio) of 44 dB. This makes the system usable for microwave-band field pick-up units (FPUs) that use higher-order modulation such as 1024 QAM (quadrature amplitude modulation). Since Ethernet enables the transmission of a bundle of multiple IF signals, we confirmed the feasibility of carrying five lines' worth of IF signals of a microwave-band FPU over a single 10-Gbps Ethernet link.
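The report does not detail the clock synchronization algorithm, but timestamp-based synchronization is commonly done with a two-way exchange as in IEEE 1588 (PTP). The sketch below shows that textbook estimate purely as an illustration of how departure and arrival time stamps can yield a clock offset; the system's actual method may differ:

```python
def estimate_offset_and_delay(t1, t2, t3, t4):
    """Two-way timestamp exchange (IEEE 1588 style), assuming a symmetric path.
    t1: packet departure time at the sender (sender clock)
    t2: packet arrival time at the receiver (receiver clock)
    t3: reply departure time at the receiver (receiver clock)
    t4: reply arrival time at the sender (sender clock)
    Returns (receiver clock offset relative to the sender, one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# e.g. timestamps in microseconds
print(estimate_offset_and_delay(t1=0.0, t2=130.0, t3=200.0, t4=270.0))  # -> (30.0, 100.0)
```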

[References]
(1) Y. Matsusaki, F. Yamagishi, F. Ito, H. Kamoda, K. Imamura and H. Hamazumi: "A Performance Comparison of SC-FDE and OFDM for 200 Mbps Wireless Camera in 42 GHz Band," Proceedings of the 2018 IEICE General Conference, B-5-93 (2018) (in Japanese)

(2) F. Yamagishi, Y. Matsusaki, F. Ito, H. Kamoda, K. Imamura and H. Hamazumi: "A Study on Boost Ratio of Pilot Signal for SC-FDE," Proceedings of the 2018 IEICE General Conference, B-5-94 (2018) (in Japanese)

(3) Y. Matsusaki, H. Kamoda, K. Imamura and H. Hamazumi: "Millimeter-wave 2x2 MIMO SC-FDE for an 8K Wireless Camera," 2018 IEEE Radio & Wireless Week (RWW 2018), MO3A-4 (2018)

(4) K. Aoki, K. Imamura and H. Hamazumi: “A Basic Study on a Radio over Ethernet System for Millimeter-wave Wireless Camera,” 2017 ITE Annual Convention, 32E-4 (2017) (in Japanese)

4.5 Technological development of ultra-directional microphone

We are conducting R&D on a microphone that can capture the sounds of a sport clearly even amid the loud cheering of spectators. To record sound from a specific direction selectively, we have been developing a shotgun microphone with sharper directivity than a conventional one since FY 2015. In FY 2017, we studied a method for designing an acoustic tube for the microphone and a signal processing technology for improving the directivity.

A shotgun microphone achieves narrow directivity through the effect of an acoustic tube with slits installed in front of the diaphragm. We previously developed a simulation technology to accurately predict the characteristics of a shotgun microphone by physically calculating the sound field inside the acoustic tube, which enables the efficient design of acoustic tubes. Using this method, we developed an ultra-directional microphone with a longer acoustic tube than a conventional one and exhibited it at the NHK STRL Open House 2017 (Figure 4-6).

Furthermore, in FY 2017, we developed a method to optimize the design parameters of an acoustic tube, studied side-lobe suppression by signal processing and demonstrated their effectiveness(1).
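Reference (1) describes designing the tube with a genetic algorithm. The sketch below shows the general shape of such a search; the slit parameterization and the cost function standing in for the acoustic simulation are hypothetical placeholders, not the actual design procedure:

```python
import random

def directivity_cost(slit_widths):
    # Placeholder for the acoustic-tube simulation: it should return, e.g., the
    # predicted side-lobe level for a tube with these slit widths (lower is better).
    return sum((w - 2.0) ** 2 for w in slit_widths)   # dummy: pretend 2 mm slits are ideal

def genetic_search(n_slits=20, pop_size=40, generations=100, mutation=0.1):
    pop = [[random.uniform(0.5, 5.0) for _ in range(n_slits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=directivity_cost)                # keep the sharpest-directivity designs
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_slits)
            child = a[:cut] + b[cut:]                 # one-point crossover
            child = [w + random.gauss(0, mutation) if random.random() < 0.2 else w
                     for w in child]                  # occasional mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=directivity_cost)

best_slit_widths = genetic_search()
```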


[References]
(1) Y. Sasaki and K. Ono: "Shotgun microphone which is designed using genetic algorithm and combined with side-lobe canceller," Autumn Meeting of the Acoustical Society of Japan, 2-P-43 (2017) (in Japanese)

Figure 4-5. Relationship between output power of the power amplifier [dBm] and maximum transmission distance [m] (the SC-FDE scheme reached 64 m versus 58 m for the OFDM scheme; transmission antenna gain 2 dBi (omni), reception antenna gain 12 dBi (electromagnetic horn), multipath margin 13 dB)

Figure 4-6. Ultra-directional microphone (below); its suppression performance was compared with that of a conventional shotgun microphone (above) at the NHK STRL Open House 2017


5 User-friendly broadcasting technologies

5.1 Information presentation technology

NHK STRL is researching technology for generating sign language CGs from sports information as well as kinesthetic display technology for conveying the shape and hardness of 3D objects so that people with hearing impairments and visual impairments can enjoy broadcasts.

■Sign language CGs for presenting sports information

In FY 2017, we began research on a sign language CG generation technology for sports information. We aim to offer sports programs that people with hearing impairments can enjoy better by using a CG character that describes sports information in sign language during sports programs.

We prototyped a system that automatically generates sign language CGs explaining the status and rules of a game by using competition data delivered during the game. We created sign language CG templates in advance, in which athlete names and points can be replaced. During a game, the system applies the competition data to the templates and generates sign language CGs in real time.

To expand user-friendly broadcasting that conveys information quickly and accurately to all viewers, including people with disabilities and non-native speakers of Japanese, we are conducting R&D on technologies for automatically converting broadcast content data and providing each user with information in the optimum way.

In our work on information presentation technology, we began a study on a sign language CG generation technology for sports information. We developed a system that automatically generates sign language CGs of fixed phrases for explaining the status and rules of games by using competition data delivered during a game and conducted experiments on delivering sign language CGs for sports programs within NHK. We also began researching automatic translation from arbitrary sentences of sports news into sign language CGs and prototyped a system that analyzes the sentence structure of sign language to support sports news, whose long sentences have a complicated structure.

In our work on haptic presentation technology for conveying information about objects' movements in video through tactile sensation, we began research on applying the technology to fast-moving sports content that is difficult to convey through speech. We conducted a basic study on information presentation using three types of stimuli (vibration, sliding and impact) and obtained knowledge about the perception and discrimination of the stimuli.

In our work on speech recognition technology for the transcription of video footage, we developed an end-to-end speech recognition technology that directly matches characters to the input audio by introducing deep neural networks (DNNs). We also developed an interface that enables the efficient modification of speech recognition results and made the system available to news producers for evaluation.

In our work on audio description technology, we developed a system that automatically generates sports commentaries describing game information such as athlete names and current scores by using competition data provided by external parties. For a major sports event, we conducted a public service providing "automated commentaries" on NHK's website and Hybridcast. We also began investigating a DNN-based speech synthesis technology for news reading toward the full-fledged use of speech synthesis technology in broadcast programs.

In our research on language processing technology, we studied the production of reading assistance information for news scripts for non-native speakers in Japan and began a study on machine translation for the efficient production of foreign-language content. In our work on news scripts with reading assistance information, we developed an interface that automatically generates information such as easy Japanese explanations for difficult expressions, dictionary information and kana syllables for Chinese characters, and allows the manual correction of errors in the information. In our work on machine translation, we prepared English and Spanish sentence pairs and developed a translation system that is accessible within NHK to support the production of Spanish closed captions for international broadcasting.

In our research on image cognition analysis, we researched image features, such as size and movement, suitable for the wide-field-of-view environment of 8K Super Hi-Vision. We investigated the estimated area of the main object and the impressions given by the entire image for various 8K images, such as scenes, persons and objects, and demonstrated that the preferred image size has a strong correlation with the actual size of the main object and with video impressions such as "dynamism" and "spaciousness." We also conducted psychological experiments on the degree of unpleasantness caused by shaking images and identified the influence of the shaking areas and viewing angle.

Figure 5-1. Prototype sports sign language CG application (alongside the game video and progress of the game, the application presents the sign language CG, game score, game status, rule explanations, athlete information, a visualization of excitement and a whistle cue through device vibration)


At the NHK STRL Open House 2017, we exhibited an application that displays sign language CGs for game information on a tablet device, together with images and text presenting information on individual athletes and the excitement at the venue (Figure 5-1). We also prototyped a system that displays the competition video and sign language CGs in a web browser and used it on an experimental basis within NHK.
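The template-filling step described above can be pictured as simple substitution of competition-data fields into prepared phrases; the event format and templates below are hypothetical, and the rendering of the resulting phrase as CG motion is outside the scope of this sketch:

```python
# Hypothetical sign-language CG templates keyed by event type
TEMPLATES = {
    "point_scored": "{athlete} scores. {team_a} {score_a} - {score_b} {team_b}.",
    "period_start": "Period {period} begins.",
}

def generate_cg_phrase(event):
    """Fill a pre-built template with fields from a competition-data event so the
    resulting fixed phrase can be rendered by the sign language CG character."""
    return TEMPLATES[event["type"]].format(**event["fields"])

print(generate_cg_phrase({
    "type": "point_scored",
    "fields": {"athlete": "Athlete A", "team_a": "Japan", "score_a": 2,
               "score_b": 1, "team_b": "Team B"},
}))
```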

We began research on machine translation from arbitrary sentences of sports news into sign language CGs. As a key technology, in FY 2017 we built a sports news corpus containing approximately 3,000 sentence pairs based on collected sports news and translations produced by sign language interpreters. It is difficult to apply machine translation to sports news, which comprises long sentences with complicated structures. To address this problem, we prototyped a system to analyze the sentence structure of sign language. We also conducted evaluation experiments on the quality of sign language CGs using the modification interface that we developed in FY 2017. Experiments on finger alphabet reading by hearing-impaired people demonstrated that the percentage of correct answers increases when the movements are modified.

We released a website for evaluating our sign language CGs for weather information covering the seven prefectures of the Kanto region in February 2017. It received about 500 survey responses, 90% of which said the CGs were generally understandable. Some respondents commented that numbers are difficult to see when shown over the CG character's face, while others said that the hands are not clearly visible when overlapping other skin areas. In response to this feedback, we made a change to prevent the hands and fingers expressing numbers from overlapping the face and also modified the character's clothes and colors. Part of this study was conducted in cooperation with Kogakuin University.

■Tactile presentation technology for touchable TV

In FY 2017, we began a study on a technology for conveying information about movements in video through the skin. Our current focus is on developing a method for conveying the moment a bat hits a ball, the direction of a moving ball and the game status through tactile sensation for fast-moving sports content that is difficult to convey through speech. In FY 2017, we classified mechanical stimuli into three types, sliding, vibration and acceleration, and identified fundamental conditions for the perception and discrimination of these stimuli. For sliding, we demonstrated that a stimulator that moves in a straight line as if rubbing the skin can express the speed and direction of movements. To express directions along three axes, we developed a system that vibrates each surface of a cube that can be held in the palms. A special suspension mechanism suppresses the vibration transmitted from adjacent surfaces so that the vibration of each surface is perceived independently (Figure 5-2). Generally, the perception of a stimulus to the skin varies greatly between individuals, but we found that the stimuli can be used for information presentation by adjusting their intensity and interval and selecting appropriate stimuli for each person. Part of this study was conducted in cooperation with Niigata University and Yamagata University.

We are researching a technology for presenting 2D information, such as diagrams in textbooks and the shapes of printed letters, to people with visual impairments. We continued developing a finger-leading presentation system that combines a tactile display, which expresses shapes and lines through the unevenness and vibration of pin arrays that move up and down, with a method for leading fingers with a kinetic robot arm. In FY 2017, we conducted demonstration evaluations of a finger-leading presentation system that can be controlled remotely via a LAN, which we had previously developed for educational purposes. We evaluated a function that allows a teacher to remotely guide the fingers of multiple students concurrently in a mock class on acupuncture and moxibustion. The results demonstrated that it is an effective means of introducing concepts in education. We also added the capability of presenting the stroke order of characters to enhance its function as a tool for studying Chinese characters and kana characters. In addition, we developed a user interface that allows people with visual and hearing impairments who have not mastered braille to read documents written in kana characters. The development of these technologies showed the feasibility of using this system for many purposes in the fields of education and welfare. Part of this research was conducted in cooperation with Tsukuba University of Technology.

[References]
(1) T. Uchida, T. Miyazaki, M. Azuma, S. Umeda, N. Katou, H. Sumiyoshi, Y. Yamanouchi and N. Hiruma: "Sign Language Support System for Viewing Sports Programs," Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS2017, pp.339-340 (2017)

(2) S. Umeda, T. Uchida, M. Azuma, T. Miyazaki, N. Katou, H. Sumiyoshi, N. Hiruma and Y. Yamanouchi: "Proposal and Evaluation of Connection Method of Motion Unit for Generating Easy-to-understand Sign Language CG," IEICE Technical Report, vol.117, no.251, SP2017-51, WIT2017-47, pp.95-100 (2017) (in Japanese)

(3) M. Azuma, T. Handa, T. Shimizu and S. Kondo: "Development of Vibration Cube to Convey Information by Haptic Stimuli," HAPp1-5L, Proceedings of the 24th International Display Workshops, Vol. 24, pp.128-130 (2017)

(4) T. Sakai, T. Handa, M. Sakajiri, T. Shimizu, N. Hiruma and J. Onishi: "Development of Tactile-Proprioceptive Display and Effect Evaluation of Local Area Vibration Presentation Method," Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol. 21, No. 1, pp.87-99 (2017)

Figure 5-2. Vibration cube (appearance) and the structure for suppressing vibration propagation (disk part, vibrator, bearing, silicone rubber, inner box and outer frame)

5.2 Speech recognition technology

The transcription of speech in video footage to clarify its content is indispensable for producing programs from a massive amount of collected video material. For the speedy delivery of accurate programs to viewers, we are researching a system to help transcribe video footage immediately. In FY 2017, we worked to increase the accuracy of speech recognition, developed an interface and conducted demonstration experiments with the aim of realizing a transcription production system using speech recognition.


■Speech recognition for transcription of video footage

Speakers in video footage that requires transcription speak with varying levels of articulation under less favorable sound recording conditions than in a studio, which lowers the word accuracy rate. For this reason, speech recognition of video footage is more challenging than conventional speech recognition of live broadcast programs. We began developing a transcription assistance system for press conferences, which require immediacy, and interviews, which are recorded in a relatively favorable environment.

Since speakers often do not utter the exact phoneme sequence given in the dictionary, there is a limit to how much the recognition accuracy can be improved by conventional speech recognition using a pronunciation dictionary. We therefore employed DNNs to improve the accuracy of end-to-end speech recognition, which directly matches characters to the input audio. In speech recognition processing, it is generally difficult to learn the acoustic features of characters that appear only rarely in the training data. In FY 2017, we devised a method that learns a group (class) of characters whose acoustic features are difficult to learn and reconstructs the original characters by using clues such as the probabilities of character and word sequences. This method improved the recognition accuracy(1)(2). It is also difficult to identify the topic of video footage, which covers diverse areas. To address this problem, we improved the accuracy of the language model, which gives the probabilities of word sequences, by expressing diverse and complex topics with 200-dimensional numerical vectors(3).
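As a rough picture of what an end-to-end character recognizer looks like, the sketch below uses a CTC-trained recurrent model in PyTorch; the character-class grouping and the 200-dimensional topic vectors described above are not reproduced, and the feature and vocabulary sizes are illustrative:

```python
import torch
import torch.nn as nn

class CharCTCRecognizer(nn.Module):
    """Minimal end-to-end recognizer: acoustic feature frames in, per-frame
    character probabilities out, trained with the CTC loss."""
    def __init__(self, n_feats=40, n_chars=3000, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_chars + 1)   # +1 for the CTC blank symbol

    def forward(self, feats):                           # feats: (batch, frames, n_feats)
        h, _ = self.encoder(feats)
        return self.out(h).log_softmax(dim=-1)

model = CharCTCRecognizer()
ctc = nn.CTCLoss(blank=3000)
feats = torch.randn(2, 100, 40)                         # two dummy utterances
targets = torch.randint(0, 3000, (2, 20))               # dummy character indices
log_probs = model(feats).transpose(0, 1)                # CTCLoss expects (frames, batch, chars)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 20))
loss.backward()
```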

We developed an interface that allows the user to refer to speech recognition results efficiently together with the video and audio and to modify recognition errors as necessary with minimal operations(4). Since this interface is available as a web application, the user does not need to install software to check the speech recognition results for input video files on their PC (Figure 5-3). For some video materials such as press conferences, it is not necessary to transcribe every word of every utterance. Our interface therefore automatically divides the video into segments based on transition points and pauses and assigns a keyword to each segment for efficient reference to the necessary segments. Words in the recognition results are highlighted in synchronization with video playback so that the user can easily find the recognized characters corresponding to the audio. We also linked video playback and stop with text-editing operations to avoid the trouble of playing and stopping the video and changing the playback start point. This interface reduced the time needed to correct recognition errors by 30%.
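The pause-based segmentation mentioned above can be approximated from short-time audio energy alone, as in the sketch below; the threshold and minimum pause length are arbitrary, and the shot-transition cues used by the actual interface are omitted:

```python
import numpy as np

def segment_by_pauses(frame_energy, threshold, min_pause_frames=30):
    """Split a recording at sustained low-energy (pause) regions.
    frame_energy: 1-D array of short-time audio energy, one value per frame."""
    silent = np.asarray(frame_energy) < threshold
    segments, seg_start, run = [], 0, 0
    for i, is_silent in enumerate(silent):
        run = run + 1 if is_silent else 0
        if run == min_pause_frames:                   # pause confirmed
            pause_start = i - min_pause_frames + 1
            if pause_start > seg_start:
                segments.append((seg_start, pause_start))
            seg_start = i + 1
        elif run > min_pause_frames:
            seg_start = i + 1                         # pause continues: push the next start forward
    if seg_start < len(silent):
        segments.append((seg_start, len(silent)))
    return segments

# e.g. energies of 300 frames with a long quiet stretch in the middle
energy = np.concatenate([np.full(120, 1.0), np.full(60, 0.01), np.full(120, 1.0)])
print(segment_by_pauses(energy, threshold=0.1))       # -> [(0, 120), (180, 300)]
```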

We released the system to news program producers and conducted evaluation experiments to verify its effectiveness(5).

■ Introduction of DNN speech recognition to the closed-captioning system

In response to requests for efficient closed captioning and more programs with closed captions, we worked to introduce DNNs into the speech recognition used for closed captioning. Since recent language models are large in scale and not suited to loading recent news articles in a short time, we developed an algorithm that recognizes multiple word networks (language models) by connecting them. The recognition algorithm has a DNN calculation unit and a search unit structured in parallel for high-speed operation. We also optimized the algorithm for outputting recognized words sequentially with the new method to enable the rapid correction of recognition errors. We plan to continue developing interfaces for connecting these technologies with existing equipment and to enhance the speech recognition systems used for closed captioning.

[References]
(1) H. Ito, A. Hagiwara, M. Ichiki, T. Mishima, S. Sato and A. Kobayashi: "End-to-end Speech Recognition for Languages with Ideographic Characters," APSIPA ASC, 118 (2017)

(2) H. Ito, A. Hagiwara, M. Ichiki, T. Mishima, S. Sato and A. Kobayashi: “End-to-end Speech Recognition using Class Labeling,” Autumn Meetings of the Acoustical Society of Japan, 1-R-12, pp. 79-82 (2017) (in Japanese)

(3) A. Hagiwara, H. Ito, M. Ichiki, T. Mishima, S. Sato and A. Kobayashi: “Domain Estimation Language Model by Distributed Representation,” Autumn Meetings of the Acoustical Society of Japan, 2-Q-4, pp. 133-134 (2017) (in Japanese)

(4) T. Mishima, M. Ichiki, A. Hagiwara, H. Ito, S. Sato and A. Kobayashi: "Development of transcription interface for video footage," ITE Annual Convention, 23D-3 (2017) (in Japanese)

(5) T. Mishima, M. Ichiki, A. Hagiwara, H. Ito, S. Sato and A. Kobayashi: "Experimental Verification of Transcription Interface Using Speech Recognition," ITE Winter Annual Convention, 12C-6 (2017) (in Japanese)

Figure 5-3. Transcription interface (a web application, shown here for a press conference, with automatic segmentation of the material, automatic keyword assignment and editing operations linked with playback and stop)

5.3 Audio description technology

We are researching "audio description" technologies, which produce voice explanations for live broadcast programs so that people with visual impairments can enjoy live sports programs better. Our research aims first to realize "automated commentary," which automatically generates commentary on the content in place of a human announcer, and then "automated audio description," which supplements human commentary with auxiliary voice explanations for visually impaired people. In FY 2017, we identified issues toward the development of an automated commentary service and an audio description broadcasting service for sports programs.

■Automated commentary service in sports programs

We built a system for providing live footage of sports programs with voice commentary and subtitles, which NHK automatically generates from competition data delivered by external parties, and for distributing the video on the internet and Hybridcast. Using this system, we offered an automated commentary service for sports programs on a special website of


NHK Online and Hybridcast. (A total of 17 videos were delivered for four sports: ice hockey, curling, bobsleigh-skeleton and luge.) (Figure 5-4)

We made four modifications to the system that we built in FY 2016. First, we implemented a capability to automatically produce speech for an entire program, including not only running commentary on the game but also a pre-game rule explanation and venue introduction as well as a post-game results summary, according to a schedule from the delivery start time to the end time. Second, we developed a technique that specifies the parts to be stressed during commentary production processing and reflects them in the voice by controlling the speech synthesis unit. Third, we developed a technology for extracting and using events necessary for speech generation from image data contained in the game data and applied it to some sports. Fourth, we developed a new speech synthesis technology that uses DNNs(1). Training a synthesis model by adding a small amount of live-tone speech data to existing reading-tone training data made it possible to synthesize high-quality live-tone speech from a small amount of training data. For the high-quality reproduction of the stressed intonation at the end of a sentence, which is common in running commentary, we also developed a method for training a model by classifying sentence-end intonations into declarative and stress types.
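The event-to-speech path described above (fixed sentence patterns filled from competition data, with stressed parts marked for the synthesizer) can be sketched as follows; the event fields and the <stress> markup are hypothetical stand-ins for whatever the actual synthesis unit accepts:

```python
def commentary_from_event(event):
    """Turn one competition-data event into a commentary sentence, wrapping the
    part to be emphasized so the speech synthesizer can raise its prominence."""
    if event["type"] == "goal":
        return ("In front of the goal, {team} shoots. <stress>Goal!</stress> "
                "{team} leads {score_for} to {score_against}.").format(**event)
    if event["type"] == "period_end":
        return "End of period {period}. The score is {score_for} to {score_against}.".format(**event)
    return ""

print(commentary_from_event({"type": "goal", "team": "Japan",
                             "score_for": 2, "score_against": 1}))
```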

■Challenges in offering automated commentary as a service for visually impaired people

We evaluated the extent to which automated commentary helps people with visual impairments understand the situation in a game. A comparison between radio, which conveys information by speech only, and automated commentary demonstrated that automated commentary helps them understand the game situation to the same extent as radio (Figure 5-5)(2).

We also compared running commentaries between TV and radio, classified the comments and examined what information should be included to complement TV running commentaries. The results showed that radio conveys a great deal of information beyond the game status and event occurrences, and that automated commentary needs to use not just competition data but also additional information, depending on the sport, to provide the same amount of information as radio(3). Furthermore, we studied an automated audio description technology for adding an audio description to programs that have a running commentator's speech. We demonstrated that an automated audio description can help users understand programs even when two voices overlap, provided the voice quality of the audio description is audibly separable from that of the program speech(4).

■Speech synthesis technology for news reading

We began investigating a DNN speech synthesis technology for news reading and prepared training data with the aim of using it for full-scale broadcast programs.

■Application for practical use

We prototyped a system that synchronizes an audio description played on smartphones and tablet devices with a program by using a Hybridcast-linked application. We also measured the time needed to distribute the audio description via external servers to verify its feasibility for live broadcasting.

[References]
(1) K. Kurihara, A. Imai, H. Sumiyoshi, Y. Yamanouchi, N. Seiyama, S. Sato, I. Yamada, T. Kumano, R. Tako, T. Miyazaki, M. Ichiki, T. Takagi, S. Oshima and K. Nishida: "Automatic Generation of Audio Descriptions for Sports Program," International Broadcasting Convention (IBC) Conference (2017)

(2) M. Ichiki, T. Shimizu, A. Imai and T. Takagi: "Study on Automated Audio Description System for Accessibility Broadcasting Services in a Sports TV Program," ITE Annual Convention, 33D-2 (2017) (in Japanese)

(3) S. Sato, H. Sumiyoshi, A. Imai, Y. Yamanouchi, N. Seiyama, T. Shimizu, H. Kaneko, T. Kumano, T. Miyazaki, K. Kurihara and M. Ichiki: "Classification for Automatic Audio Description of Sports Broadcasts," ITE Winter Annual Convention, 11C-4 (2017) (in Japanese)

(4) M. Ichiki, T. Shimizu, A. Imai and T. Takagi: "Investigation of Simultaneous Hearing of Live Commentaries and Automated Audio Descriptions," Autumn Meeting of ASJ, 3-5-7 (2017) (in Japanese)

Figure 5-5. Subjective evaluation results on the understanding of content with/without automated commentary (audio description) for sighted people and visually impaired people (average rating of understanding of the content, 5: very good, 1: very poor; error bars: standard error)

Figure 5-4. Concept of automated commentary service (real-time competition data created at the venue is used to generate a description, e.g. "In front of the goal, Japan shoots. Goal. They did it! Japan has come from behind.", and to synthesize speech at the broadcast station, which accompanies the video and sports audio for viewers at home and outdoors)



5.4 Language processing technology

As a new way to provide information to non-native speakers of Japanese in Japan, we are researching news scripts with reading assistance information. We also began research on machine translation for efficiently producing foreign-language content.

■News scripts with reading assistance information

News texts available on the internet are not easily understandable for many non-native speakers because they contain many difficult Chinese characters and expressions. We are therefore researching various kinds of reading assistance information to make news scripts easier for them to understand. As reading assistance information, we prepared easy Japanese explanations for difficult expressions, kana syllables of Chinese characters and dictionary information for difficult words. The user can adjust the display of reading assistance information with slider and button operations.

Among these kinds of reading assistance information, the easy Japanese explanations are automatically generated by a machine translation system, while the kana syllables and dictionary information are automatically generated by a morphological analyzer. Automatic generation, however, entails errors. We therefore developed an interface to correct errors simply and easily and demonstrated the actual production of news scripts with reading assistance information(1).
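As an illustration of the morphological-analysis step, the sketch below attaches katakana readings to each word with MeCab (via the mecab-python3 package); it assumes an IPAdic-style dictionary in which the reading is the 8th comma-separated feature, which is an assumption about the environment rather than a description of the actual production tool:

```python
import MeCab  # mecab-python3; assumes an IPAdic-style dictionary is installed

def attach_kana(text):
    """Attach a katakana reading to each word of a sentence, as a stand-in for the
    automatic generation of kana syllables described above."""
    tagger = MeCab.Tagger()
    annotated = []
    for line in tagger.parse(text).splitlines():
        if line == "EOS":
            break
        surface, features = line.split("\t")
        fields = features.split(",")
        reading = fields[7] if len(fields) > 7 else ""  # reading field (IPAdic layout)
        annotated.append((surface, reading))
    return annotated

print(attach_kana("難しい漢字に振り仮名を付ける"))
```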

We conducted a comparative analysis of easy Japanese for people with intellectual disabilities. Specifically, we analyzed articles on the same topics from NHK's news website "NEWSWEB," its easy Japanese news website "NEWSWEB EASY" and "STAGE," a welfare organization's newspaper for people with intellectual disabilities(2). The results showed that more easy words and shorter sentences are used in NEWSWEB EASY and STAGE than in NEWSWEB. They also identified a difference in the appearance of technical terms between NEWSWEB EASY and STAGE. This research was conducted in cooperation with Shukutoku University Junior College, the University of the Sacred Heart, Tokyo, and Future University Hakodate.

■Machine translation technology

We began research on English-Spanish machine translation with the aim of supporting the production of Spanish closed captions for video-on-demand (VOD) travel programs on NHK WORLD. We prepared about 50,000 pairs of English and Spanish sentences from five years of scripts of NHK WORLD RADIO JAPAN and about 5 million pairs of sentences extracted from the internet. We prototyped a neural network translation system trained on these sentence pairs. An experiment on translating closed captions for the VOD travel programs demonstrated that using the translation system increases the translation volume by 1.6 times compared with manual translation. We also developed a web interface for the translation system that is accessible within NHK (Figure 5-6). Moreover, we prepared 40,000 pairs of Japanese and English news sentences by manual translation for the machine translation of Japanese and English news scripts. This research was conducted in cooperation with the National Institute of Information and Communications Technology (NICT).

[References]
(1) http://aamt.info/app-def/S-102/mtsummit/2017/technology-showcase/

(2) A. Uchinami, K. Iwata, T. Kumano, I. Goto, H. Tanaka and H. Otsuka: "Easy-to-understand Japanese: A comparison between Japanese for People with Intellectual Difficulties and Japanese for Non-native Speakers," The Japanese Journal of Language in Society, Vol.20, No.1, pp.29-41 (2017) (in Japanese)


Figure 5-6. English-Spanish translation system

5.5 Image cognition analysis

We are engaged in research to identify the relationship between image features, such as size and movement, and subjective preferences and the degree of unpleasantness, with the aim of supporting the production of video suitable for the wide field-of-view environment of 8K Super Hi-Vision (SHV).

■Measurement of preferred image size

To identify image features suitable for viewing on a large screen, we are studying preferences regarding image size through psychological experiments. In FY 2017, we conducted psychological experiments to measure the preferred image size of various SHV images, such as scenes, persons and objects, shown on a large display and investigated the relationship between the preferred image size and the features of the displayed image. We analyzed several image-feature values by investigating the area presumed to be the main object and the impression of the entire image through subjective evaluations. The results demonstrated that the preferred image size has a strong correlation with the actual size of the main object in the image (Figure 5-7). They also showed that it has a high correlation with indicators that quantify the impressions given by the entire image, such as "power" and "space"(1). We plan to study the effects of display size and viewing distance.
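The reported correlation is between the preferred display ratio and the physical size of the main object plotted on a logarithmic axis (Figure 5-7). The sketch below shows how such a coefficient is computed; the data points are invented stand-ins, not the experimental values:

```python
import numpy as np

# Toy stand-ins for the per-image measurements (not the actual data)
object_size_m = np.array([0.3, 1.2, 2.5, 8.0, 15.0, 40.0, 90.0])
preferred_ratio_pct = np.array([22.0, 31.0, 38.0, 47.0, 55.0, 63.0, 71.0])

# Pearson correlation between log object size and preferred display ratio
r = np.corrcoef(np.log10(object_size_m), preferred_ratio_pct)[0, 1]
print(f"correlation r = {r:.3f}")
```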

■Shaking image analysis technology

Viewers may experience unpleasantness similar to motion sickness when viewing video with a lot of movement at a large viewing angle. We are researching a technology for analyzing such images and estimating the degree of unpleasantness caused by viewing them. In FY 2017, we conducted psychological experiments using SHV images to investigate how the amount of recognizable shaking motion on the screen (cognitive quantity of shakiness) and the degree of unpleasantness vary with the conspicuousness (visual saliency) of multiple shaking areas and their positional relationship. The results showed that both the cognitive quantity of shakiness and the degree of unpleasantness tend to saturate at the viewing angle of SHV and that the degree of unpleasantness is predominantly affected by highly salient shaking areas and is lower when the distance between them is larger, whereas the cognitive quantity of shakiness is affected by neither the saliency nor the positions of the shaking areas(2).

On the basis of these results, we refined our algorithm for estimating the cognitive quantity of shakiness and the degree of unpleasantness from the physical characteristics of images, such as the spatiotemporal frequency components of shaking areas and their positional relationship. We verified the validity of the values estimated by this algorithm through experiments using general images.
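The findings above (dominance of highly salient shaking areas, weaker unpleasantness when they are far apart) suggest the general shape of such an estimator. The sketch below is only an illustrative scoring rule built on those two observations, not NHK STRL's actual algorithm:

```python
import math

def unpleasantness_score(regions):
    """regions: list of (x, y, shake_amplitude, saliency) for detected shaking areas.
    Salient, strongly shaking regions raise the score; wide separation lowers it."""
    if not regions:
        return 0.0
    weighted = sum(a * s for _, _, a, s in regions)          # saliency-weighted shake
    if len(regions) < 2:
        return weighted
    dists = [math.dist((x1, y1), (x2, y2))
             for i, (x1, y1, _, _) in enumerate(regions)
             for (x2, y2, _, _) in regions[i + 1:]]
    mean_dist = sum(dists) / len(dists)                      # mean pairwise separation
    return weighted / (1.0 + 0.01 * mean_dist)

print(unpleasantness_score([(100, 100, 2.0, 0.9), (900, 600, 1.5, 0.7)]))
```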

[References]
(1) M. Harasawa, Y. Sawahata and K. Komine: "The factors affecting preferred physical size of high-resolution moving images," The Visual Science of Art Conference 2017, Berlin, Aug. (2017)

(2) M. Tadenuma: "Cognition Degree of Shakiness and Unpleasantness Degree for Wide-Field Shaking Images," ITE Tech. Rep., Vol. 42, No. 4, MMS2018-2, HI2018-2, ME2018-2, AIT2018-2, pp. 93-98 (2018) (in Japanese)

Figure 5-7. Results of psychological experiments on the preferred image size (ratio of the preferred image size to the full-screen display [%] versus the actual size of the area determined as the main object [m], plotted for individual images; correlation r = 0.774)


6 Devices and materials for next-generation broadcasting

6.1 Advanced image sensors

■Three-dimensional integrated imaging devices

We are researching imaging devices with a 3D structure in our quest to develop a next-generation image sensor having more pixels and a higher frame rate. These devices are fabricated by stacking a photodetector and a signal processing circuit after each of them is formed, and they have a signal processing circuit for each pixel directly beneath the photodetector.

We are conducting R&D on the next generation of imaging, recording and display devices and materials for broadcast services such as 8K Super Hi-Vision (SHV) and future three-dimensional (3D) television.

In our research on imaging technologies, we made progress in developing 3D integrated imaging devices with many pixels and a high frame rate, solid-state image sensors overlaid with multiplier films for highly sensitive 8K cameras, and organic photoconductive films for a compact, high-quality single-sensor 8K camera. In our work on 3D integrated imaging devices, we prototyped a device with 320×240 pixels (about 50×50 μm² each) and demonstrated that it can achieve a wide dynamic range by taking advantage of its capability for pixel-parallel signal processing. Our work on solid-state image sensors overlaid with multiplier films included reducing the dark current by improving the crystallinity of the crystalline selenium that constitutes the multiplier films and prototyping an 8K CMOS circuit for reading signals from stacked multiplier films. In our work on organic photoconductive films, we improved the quantum efficiency by applying new materials and realized high-efficiency transparent cells for blue and red.

In our research on recording technologies, we continued our work on holographic memory with a large capacity and high data transfer rate for SHV video signals and on a recording device with no moving parts that utilizes the motion of magnetic domains in magnetic nanowires. In holographic memory, we developed a multilevel recording technology that is expected to achieve a large capacity and high speed. As the elemental technologies, we developed a decoding method for reproduced data based on machine learning and an error correction code technology. We also developed a multilevel recording and reproduction technology using amplitude modulation as a modulation scheme suited for four-level modulation. In our research on magnetic nanowires, we investigated suitable magnetic nanowire materials, conducted simulations of magnetic domain formation and a driving domain analysis, and widened the bandwidth of our recording and reproduction evaluation system in order to increase the driving speed of magnetic domains. By incorporating an artificial superlattice structure using cobalt (Co)/terbium (Tb) multilayered films, which reduces the magnetization that hinders high-speed driving, and the spin Hall effect produced by platinum (Pt), we achieved magnetic domain driving in excess of 15 m/s, more than 10 times that of conventional devices.

In our research on displays, we studied a technology to increase the lifetime and color purity of organic light-emitting diode (OLED, a luminescent diode using organic materials) devices and a technology to increase the image quality and reduce the driving power consumption of displays, with the aim of achieving large SHV displays for home use. We also developed elemental technologies for solution-processed devices for future large flexible displays. To increase the lifetime, we fabricated at a low temperature a practical inverted OLED having a luminance half-life of 10,000 hours or more and searched for host materials for the light-emitting layer suited to a longer lifetime. We also realized a green OLED with high color purity through device modification focused on the chemical structure of the materials. Our work on higher image quality and lower power consumption included developing a technology for shortening the channel length of oxide TFTs and proposing a method for controlling the driving power and display luminance when displaying HDR video; we conducted simulations to verify the power-saving effect of the method. For solution-processed devices, we developed a patterning fabrication process for oxide semiconductors that uses a photoreaction and a high-efficiency quantum dot light-emitting diode (QD-LED) using a zinc sulfide-silver indium sulfide solid solution.

Figure 6-1. 3D integrated imaging device (cross-sectional structure of a pixel: light enters the photodetector, which is connected through a buried electrode to the signal processing circuit on a support wafer across the bonded surface, enabling pixel-parallel signal processing)


Since this structure enables the signals from all pixels to be read out simultaneously, digitized signals can be output in pixel units, which allows a high frame rate to be maintained even when the number of pixels increases (Figure 6-1).

We previously devised a noise-canceling circuit capable of pixel-parallel operation and of being stacked within pixels, and we developed elemental technologies such as a technology for miniaturizing the buried electrode that connects the photodetector with the signal processing circuit and a highly reliable stacking process.

In FY 2017, we prototyped an imaging device using an array of 320×240 pixels (about 50×50 μm² each). The device is designed to have current buffers within the pixels and signal-reading paths that ensure a circuit structure able to output digital values stably from the pixels. Regarding the fabrication process, we succeeded in connecting a buried electrode 5 μm in diameter with an alignment accuracy of 1 μm or less by using the elemental technologies that we developed in FY 2016. The prototype imaging device achieved 16-bit output over a wide dynamic range of 96 dB by taking advantage of pixel-parallel signal processing(1) (Figure 6-2).

This research was conducted in cooperation with the University of Tokyo.
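As a quick consistency note on the figures above (not an additional measurement): a 16-bit output spans 2^16 = 65,536 levels, i.e. 20 log10(65,536) ≈ 96.3 dB, which matches the 96 dB dynamic range reported.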

■ 8K solid-state image sensor overlaid with multiplier film

In next-generation broadcasting services such as 8K, the amount of light incident on each pixel of the imaging device decreases as the resolution and frame rate of the camera increase. As a drastic solution to this problem, we are developing a solid-state image sensor overlaid with a photoconductive film (multiplier film) on a CMOS circuit (Figure 6-3). The multiplier film provides electric charge multiplication with only a low applied voltage. In FY 2017, we worked to reduce the dark current of the crystalline selenium films that constitute multiplier films and prototyped an 8K CMOS circuit on which a multiplier film is overlaid.

Since the dark current that occurs in crystalline selenium films is considered to be closely related to the crystallinity of the selenium, we evaluated the crystallinity using X-ray diffraction. The results showed that the crystalline state of a tellurium layer, which is inserted to prevent the films from peeling off the substrate, affects the crystallinity of the selenium. We therefore developed a new deposition process to improve the crystalline state of the tellurium layer. This process successfully suppressed the increase in dark current with increasing applied voltage (Figure 6-4)(2).

The unevenness between a pixel electrode and its surroundings in a CMOS circuit to be overlaid with a multiplier film needs to be as small as possible to prevent film defects caused by excessive concentration of the electric field. Since our prototype 8K CMOS circuit had a maximum surface unevenness of about 900 nm, we formed an insulating layer around the pixel electrode and applied a planarization process by polishing. We confirmed that this reduces the surface unevenness to 5 nm or less.

Figure 6-2. Input/output characteristics of the prototype device (average output digital value versus luminous intensity of incident light [lx]; 16-bit output over a 96 dB range)

Figure 6-3. Structure of solid-state image sensor overlaid with multiplier film (color filter, multiplier film, insulating layer, pixel electrodes and CMOS circuit; incident light generates electric charge in the multiplier film above each pixel)

Figure 6-4. Structure (a) and dark-current characteristics (b) of the element for evaluating the multiplying effect of the crystalline selenium film (the element stacks a transparent electrode, tellurium, crystalline selenium, gallium oxide and a transparent electrode on a glass substrate; dark-current density [pA/cm²] versus applied voltage [V] at room temperature (25°C), comparing the new and conventional tellurium deposition processes)


■Organic photoconductive film for single-chip cameras with high S/N

We are conducting research on organic image sensors with the goal of realizing a single-chip color camera that is compact, lightweight and highly mobile. These sensors consist of stacked layers of three different organic photoconductive films (organic films), each sensitive to one of the three primary colors of light. The electrodes sandwiching each organic film must be transparent in order to transmit incident light to the lower layers of the stack. We previously improved the performance of the organic films for each color and achieved a quantum efficiency of 80% with a transparent cell in which an organic film for green is sandwiched between transparent electrodes. In FY 2017, we increased the efficiency of the transparent cells for blue and red.

For the transparent cell for blue, we selected a new highly durable hole-transport material as the photoelectric conversion material and added 5% of the electron-transport fullerene C60 to promote the separation of the electron-hole pairs produced by photoabsorption. We also applied electron beam evaporation, which suppresses damage to organic films, as the method for forming a transparent electrode on the organic film. Our prototype transparent cell achieved a maximum quantum efficiency of 77% when a voltage of about 6 V was applied (Figure 6-5)(3). The research on the transparent cell for blue was conducted in cooperation with Nippon Kayaku Co., Ltd.

We also re-examined the materials for the cell for red and developed a transparent cell using boron subnaphthalocyanine, which exhibits both electron and hole transport, as the photoelectric conversion material. The transparent cell achieved a maximum quantum efficiency of 80% in the red region when a voltage of about 11 V was applied (Figure 6-5)(4). This means that we have achieved high efficiency for transparent cells for all three primary colors of light, including the cell for green that we developed previously.

[References]
(1) M. Goto, Y. Honda, T. Watabe, K. Hagiwara, M. Nanba, Y. Iguchi, T. Saraya, M. Kobayashi, E. Higurashi, H. Toshiyoshi and T. Hiramoto: "Fabrication of Three-Dimensional Integrated CMOS Image Sensors with Quarter VGA Resolution by Pixel-Wise Direct Bonding Technology," 30th International Microprocesses and Nanotechnology Conference (MNC 2017), 9A-9-2 (2017)

(2) S. Imura, K. Mineo, K. Miyakawa, H. Ohtake and M. Kubota: “High-efficiency and low dark current crystalline selenium-based heterojunction photodiode with a high-quality tellurium nucleation layer,” Proc. of the IEEE Sensors 2017, B3L-C, pp.1140-1142 (2017)

(3) T. Takagi, Y. Hori, T. Sakai, T. Shimizu, H. Ohtake and S. Aihara: "Characteristic improvement in blue-sensitive organic photoconductive film sandwiched between transparent electrodes," ITE Winter Annual Conference, 22C-2 (2017) (in Japanese)

(4) T. Takagi, Y. Hori, T. Sakai, T. Shimizu, H. Ohtake and S. Aihara: "Fabrication of Transparent-Type Red Sensitive Organic Photoconductive Cell with High Quantum Efficiency," Ext. Abstr. of the 65th JSAP Spring Meet., 20p-A204-6 (2018) (in Japanese)

6.2 Advanced storage technologies

■Multilevel holographic memory

Storing 8K video for a long time requires a storage system for video archiving with a very high transfer rate and large capacity. We have been researching holographic memory to meet these requirements. In FY 2017, as elemental technologies for multilevel recording, we developed a decoding method for reproduced data based on machine learning and an error correction code technology. We also developed a multilevel recording and reproduction technology using amplitude modulation as a modulation scheme suited for four-level modulation.

Holographic memory uses laser beams to record and reproduce a data page in which symbol pixels consisting of dark pixels and bright pixels are arranged in two dimensions. Focusing on the fact that a data page is a kind of image, we developed a decoding technology using a convolutional neural network (Figure 6-6). This technology reduces the number of bit errors in reproduced data through preliminary machine learning of the rules of the modulation scheme and the causes of optical system errors such as lens aberration(1). We demonstrated that this method reduces the number of bit errors by 60% compared with a conventional method that determines the data of a two-level modulation scheme using a threshold value.
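A convolutional decoder of this kind can be pictured as a small image-to-label network that classifies every position of the captured page into one of the modulation levels; the page geometry, per-pixel (rather than per-symbol) classification and network size below are illustrative, not the actual design:

```python
import torch
import torch.nn as nn

class PageSymbolDecoder(nn.Module):
    """Minimal CNN that maps a reproduced data-page image to per-position
    modulation-level scores (four levels for four-level modulation)."""
    def __init__(self, levels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, levels, kernel_size=1),        # per-position class scores
        )

    def forward(self, page):                             # page: (batch, 1, H, W)
        return self.net(page)                            # (batch, levels, H, W)

decoder = PageSymbolDecoder()
page = torch.rand(1, 1, 64, 64)                          # dummy reproduced page
target = torch.randint(0, 4, (1, 64, 64))                # known page used for training
loss = nn.CrossEntropyLoss()(decoder(page), target)
loss.backward()
decoded_levels = decoder(page).argmax(dim=1)             # decoded level at each position
```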

Since an error correction code for four-level modulation requires a higher correction capability than one for two-level modulation, we developed a spatially coupled low-density parity-check (LDPC) code dedicated to multilevel holographic memory(2). Verification of the performance of this code through simulations demonstrated that it can correct errors if the bit-error rate of the data page before correction is 1.3×10⁻² or less.

As a way to create a data page for four-level modulation, we developed a 10:9 modulation code that assigns bright pixels of three different luminance levels to three out of nine symbols. This method, which sets a standard-luminance pixel in any of the three symbols, has high tolerance against the brightness non-uniformity that occurs when recording to and reproducing from the recording media. We evaluated recording and reproduction with a holographic memory optical system using this method and confirmed that data can be reproduced with a bit-error rate that can be corrected by the spatially coupled LDPC code.

■Magnetic high-speed recording devices utilizing magnetic nanodomains

With the goal of realizing a highly reliable high-speed magnetic recording device with no moving parts, we are developing a recording device that utilizes the high-speed-motion characteristics of nanosize magnetic domains in magnetic nanowires. In FY 2017, we developed fundamental technologies for further increasing the driving speed of recorded magnetic domains.

Figure 6-5. Quantum efficiency of transparent cells for blue and red (quantum efficiency [%] versus wavelength [nm])


In our quest for a device that enables the high-speed driving of magnetic domains, we previously employed an artificial superlattice structure that has an ultrathin ruthenium (Ru) interlayer between cobalt (Co)/palladium (Pd) multilayered films. This structure is effective for reducing magnetization, which hinders high-speed driving, and achieved magnetic domain driving at 1 m/s. In FY 2017, we began fabricating and evaluating a cobalt/terbium (Tb) multilayered film as a new material for even higher-speed driving(3). Since terbium is weakly magnetized in the direction opposite to that of cobalt, the net magnetization of a multilayered film stacking these elements equals the difference in magnetization between cobalt and terbium, which means that a further reduction in magnetization is possible. We also fabricated a magnetic nanowire structure with a thin platinum (Pt) layer stacked on this cobalt/terbium multilayered film and found that this structure can improve the driving speed of magnetic domains significantly by using the spin Hall effect produced by the platinum (Figure 6-7(a)). In particular, magnetic nanowires with a structure of Pt (3 nm)/[Co (0.3 nm)/Tb (0.6 nm)]×5 cycles achieved magnetic domain driving at 15 m/s, more than 10 times that of the previous [Co/Pd] nanowires.

As an elemental technology for the high-speed driving of magnetic nanodomains, we increased the bandwidth of the signal preamplifier system of the magnetic recording head that is used to detect magnetic domains in magnetic nanowires. When magnetic domains are driven at high speed, conventional direct-current amplifiers have difficulty tracking and detecting changes in their magnetization direction. To address this problem, we prototyped a new recording and reproduction evaluation unit that provides precise control of the current-induced magnetic field during recording and can capture radio-frequency signals up to about 1.6 GHz with low noise by series-connecting a head preamplifier for a hard disk drive with the input/output unit of the magnetic recording head. We also developed evaluation equipment that can detect the motion of magnetic domains during driving in real time by using a magneto-optical Kerr effect microscope to enable the evaluation of the shape of driven magnetic domains (Figure 6-7(b)), and we evaluated the driving speed.

To further increase the driving speed of magnetic domains in magnetic nanowires, we conducted simulations using the Landau–Lifshitz–Gilbert (LLG) equation, which describes magnetization dynamics and damping in general magnetic materials, to investigate the effect of an applied magnetic field in assisting magnetic domain driving. The results showed that the driving speed can be almost doubled, compared with the conventional driving method using only pulse currents, by applying a local static magnetic field having only an in-plane component in the magnetic nanowire to the immediate proximity of the domain wall simultaneously with the pulse currents(4).
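For reference, the LLG equation used in such simulations takes the following standard Gilbert form (the current-induced torque terms added for domain wall driving are omitted here):

\frac{\partial \mathbf{M}}{\partial t} = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_{s}}\,\mathbf{M}\times\frac{\partial \mathbf{M}}{\partial t}

where M is the magnetization, γ is the gyromagnetic ratio, α is the Gilbert damping constant, M_s is the saturation magnetization and H_eff is the effective field, which in the assisted case includes the locally applied in-plane static field.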

[References]
(1) Y. Katano, T. Muroi, N. Kinoshita and N. Ishii: "Image Recognition Demodulation Using Convolutional Neural Network for Holographic Data Storage," Tech. Dig. ISOM'17, Tu-G-04, pp. 35-36 (2017)
(2) N. Ishii, Y. Katano, T. Muroi and N. Kinoshita: "Spatially coupled low-density parity-check error correction for holographic data storage," Jpn. J. Appl. Phys., 56, pp. 09NA03-1–09NA03-4 (2017)
(3) M. Okuda, M. Kawana, Y. Miyamoto and N. Ishii: "Precise Control of Current Driven Domain Wall Motion by Diphasic Current Pulses," MMM 2017, EC-08, pp. 449-450 (2017)
(4) M. Kawana, M. Okuda, Y. Miyamoto and N. Ishii: "Estimation on current-driven domain wall motion in magnetic nanowire by use of magnetic field assist," TMRC 2017, DP-11, pp. 175-176 (2017)

Figure 6-6. Decoding using a convolutional neural network (a reproduced data page is fed to a convolutional neural network that selects the decoded symbols)

Figure 6-7. Concept of magnetic domain driving assisted by the spin Hall effect and state of magnetic domains driven by current. (a) Concept of the spin Hall effect produced by platinum: torque generated by spin is transmitted from the Pt layer to the [Co/Tb] layer, and the magnetic moment rotates to move the magnetic domains (upward domains appear as bright regions, downward domains as dark regions). (b) State of magnetic domains driven by current along a 40-μm-long magnetic nanowire between electrodes, from the initial state through the application of one to four 500-ns pulses; multiple domains move to the right.


6.3 Next-generation display technologies

■Flexible OLED displays with longer lifetime and higher color purity

Organic light-emitting diode (OLED) devices use active materials such as alkali metals for their electron injection layer. Since these materials are sensitive to moisture and oxygen, the devices deteriorate over time when used on a flexible substrate such as a plastic film. This poses the greatest challenge in realizing a flexible OLED display. To address this issue, we are developing an OLED that does not use alkali metals and can better withstand oxygen and moisture, called an inverted OLED. In FY 2017, we developed a technology for fabricating inverted OLEDs at a low temperature to realize a flexible OLED with a longer lifetime. Conventional inverted OLEDs use zinc oxide, which requires heat treatment at 400°C, for their electron injection layer. This makes it difficult to apply an inverted OLED to a versatile plastic film substrate, which does not have sufficient heat resistance. We therefore employed zinc oxide nanoparticles mixed with a tin compound, which can be formed at a low temperature of 120°C, for the electron injection layer and realized a practical inverted OLED with a luminance half-life of 10,000 hours or more. This showed the feasibility of its application to film substrates(1).

While it had been known that using phosphorescent materials for an OLED's light-emitting layer can produce a high emission efficiency, the design criteria for materials that give a device a longer lifetime had not been systematically understood. We therefore used thermally activated delayed fluorescence (TADF) materials having similar molecular structures as the host materials of the light-emitting layer and compared and analyzed their lifetime characteristics to identify design criteria for host materials suited to a longer-lifetime device. The results showed that using TADF materials with a smaller molecular size can realize a device with a longer lifetime(2).

To reproduce the wide color gamut of SHV on an OLED display, it is necessary to develop a power-saving green OLED with a high color purity. To meet this requirement, we used a platinum complex having a rigid network molecular structure as the luminescent material and employed a top-emission structure that extracts light through the upper electrode. This realized a green OLED with a color purity corresponding to x-y chromaticity coordinates of (0.18, 0.74)(3) (Figure 6-8).
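For context, the reported chromaticity can be compared with the green primaries of the HDTV (ITU-R BT.709) and 4K/8K (ITU-R BT.2020) color standards. The minimal Python sketch below uses the published primary coordinates and a plain Euclidean distance in the xy plane purely as an illustration; it is not a formal gamut-coverage calculation.

import math

# Primary coordinates are taken from the ITU-R BT.709 and BT.2020 specifications;
# the xy-plane distance is used only as a rough indication of closeness.
bt2020_green = (0.170, 0.797)   # 4K/8K (BT.2020) green primary (x, y)
bt709_green = (0.300, 0.600)    # HDTV (BT.709) green primary (x, y)
device_green = (0.18, 0.74)     # reported chromaticity of the developed green OLED

def xy_distance(a, b):
    """Euclidean distance between two CIE xy chromaticity points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(round(xy_distance(device_green, bt2020_green), 3))  # ~0.058: close to the 8K target green
print(round(xy_distance(device_green, bt709_green), 3))   # ~0.184: well beyond the HDTV green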

■Technologies for increasing image quality and lowering driving power consumption

We are conducting R&D on improving the performance of high-mobility oxide TFTs and on driving and signal processing technologies in order to increase the image quality and lower the power consumption of sheet-type OLED displays. In FY 2017, we developed a technology for shortening the channel length of oxide TFTs. We found that part of a TFT channel changes into a conductor when hydrogen is injected into an oxide semiconductor layer that uses In-Ga-Sn-O (IGTO). By taking advantage of this phenomenon, we successfully shortened the channel length of TFTs(4) and confirmed that the method can shorten the channel length to 1.4 μm. This channel-shortening technology was developed in cooperation with Kobe Steel, Ltd.

As a signal processing technology for increasing the image quality of OLED displays, we studied a method for controlling the driving power and display luminance when displaying HDR video and conducted simulations to verify its effects. When displaying HDR video with a high average luminance on an OLED display, conventional technologies suffer from tone degradation in dark regions because they uniformly suppress the luminance of all signal levels to limit the power. To address this problem, we devised a method that suppresses the driving power while maintaining the tone representation of both dark and light regions and confirmed its effectiveness using evaluation images(5).
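The following Python sketch illustrates the general idea under simplified assumptions (a normalized luminance signal, a fixed knee level and a hypothetical average-power budget): dark and mid tones below the knee are left untouched while only the highlights are compressed until the frame's mean luminance fits the budget. It is a conceptual stand-in, not NHK's actual algorithm.

import numpy as np

def limit_drive_power(luminance, apl_budget=0.4, knee=0.5):
    """Hypothetical soft-knee limiter: levels at or below the knee pass through
    unchanged (preserving dark-region tone), while levels above the knee are
    linearly compressed toward a lowered peak until the frame's mean luminance
    fits within the average-power budget (or until the peak reaches the knee)."""
    l = np.clip(np.asarray(luminance, dtype=float), 0.0, 1.0)
    out = l
    if out.mean() <= apl_budget:
        return out                                  # already within the power budget
    for peak in np.linspace(1.0, knee, 51):         # gradually lower the peak level
        out = np.where(l <= knee, l, knee + (l - knee) * (peak - knee) / (1.0 - knee))
        if out.mean() <= apl_budget:
            break                                   # budget met with dark tones intact
    return out

# Example with a bright synthetic frame: the mean drops from about 0.5 toward the 0.4 budget.
frame = np.random.rand(216, 384)
limited = limit_drive_power(frame)
print(round(frame.mean(), 3), round(limited.mean(), 3))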

Figure 6-8. Chromaticity diagram and spectrum of the developed OLED device (emission spectra of the platinum complex with the top-emission structure and with a conventional structure versus a conventional device, luminescence intensity against wavelength (nm); the narrower spectrum corresponds to high-color-purity green at (x: 0.18, y: 0.74), plotted against the conventional HDTV and 4K/8K color standards)

■Solution-processed devices for large flexible displays


With the goal of realizing a large flexible display that is thin, lightweight and rollable, we are conducting R&D on oxide TFTs that can be fabricated by a solution process, without the large vacuum chambers required for conventional panel production, and on electroluminescent devices using quantum dots (QDs), called quantum dot light-emitting diodes (QD-LEDs).

In our work on solution-processed oxide TFTs, we developed a method for fabricating TFTs more easily and simply by taking advantage of the solution process. Previously, TFTs were fabricated by complicated techniques such as photolithography using photoreactive organic materials. We simplified this fabrication process by developing a method for patterning oxide semiconductors through a direct photoreaction. TFTs fabricated by the new method showed electrical performance comparable to that of TFTs fabricated by the conventional method(6). This result indicates that the method is effective for realizing inexpensive large displays.

A QD-LED is a luminescent device that uses QDs, semiconductor nanocrystals with a size of about 10 nm, as its luminescent material; the wavelength and full width at half maximum of the emission spectrum can be controlled through the grain size. In FY 2017, we prototyped a QD-LED that uses a zinc sulfide–silver indium sulfide (ZnS–AgInS2) solid solution (ZAIS) as a low-toxicity QD material. ZAIS QDs can change the luminescence wavelength by controlling the elemental composition ratio as well as the grain size. Our QD-LED using ZAIS QDs emitted red light with an external quantum efficiency of 1.9%(7). The research on the QD-LED using ZAIS was conducted in cooperation with Nagoya University and Osaka University.

[References]
(1) T. Sasaki, H. Fukagawa, T. Shimizu, Y. Fujisaki and T. Yamamoto: "Improved operational stability of inverted organic light-emitting diodes using Sn-doped zinc oxide nanoparticles as an electron injection layer," Proceedings of EuroDisplay 2017
(2) H. Fukagawa, T. Shimizu, Y. Iwasaki and T. Yamamoto: "Operational lifetimes of organic light-emitting diodes dominated by Förster resonance energy transfer," Scientific Reports, DOI: 10.1038/s41598-017-02033-3 (2017)
(3) T. Oono, Y. Iwasaki, T. Hatakeyama, T. Shimizu and H. Fukagawa: "Demonstration of Efficient Green OLEDs with High Color Purity," SID 2017 Digest, pp. 853-856 (2017)
(4) M. Nakata, M. Ochi, H. Tsuji, T. Takei, M. Miyakawa, Y. Fujisaki, H. Goto, T. Kugimiya and T. Yamamoto: "Fabrication of a Short-Channel Oxide TFT Utilizing the Resistance-Reduction Phenomenon in In-Ga-Sn-O," SID 2017 Digest, pp. 1227-1230 (2017)
(5) T. Yamamoto, T. Okada, T. Usui and Y. Fujisaki: "Picture Level Control Method for Super Large-Area Display," IDW'17, VHF6-1, pp. 1028-1031 (2017)
(6) M. Miyakawa, M. Nakata, H. Tsuji and Y. Fujisaki: "Direct Photoreactive Patterning Method for Fabricating Aqueous Solution-Processed IGZO TFTs," IDW'17, AMD3-2, pp. 336-339 (2017)
(7) G. Motomura, T. Tsuzuki, T. Kameyama, T. Torimoto, T. Uematsu, S. Kuwabata and T. Yamamoto: "Fabrication of Low Toxic Quantum Dot Light-Emitting Diode using ZnS-AgInS2," ITE Annual Convention 2017, 32C-1 (2017) (in Japanese)


7 Research-related work

NHK STRL promotes the use of its research results on 8K Super Hi-Vision and other technologies in several ways, including through the NHK STRL Open House, various exhibitions and reports. It also works to develop technologies by forging links with other organizations and collaborating in the production of programs.

We contributed to domestic and international standardization activities at the International Telecommunication Union (ITU), the Asia-Pacific Broadcasting Union (ABU), the Information and Communications Council of the Ministry of Internal Affairs and Communications, the Association of Radio Industries and Businesses (ARIB) and various organizations around the world. We also promoted Japan's terrestrial digital broadcasting standard, ISDB-T (Integrated Services Digital Broadcasting - Terrestrial), by participating in activities at the Digital Broadcasting Experts Group (DiBEG) and the International Technical Assistance Task Force of ARIB.

The theme of the FY 2017 NHK STRL Open House was "Evolving Broadcast Technology 2020 and Beyond." It featured 30 exhibits on our latest research results, such as image representation technologies for conveying the excitement of sports to the world, "Smart Production" technologies that support program production by using AI and big-data analysis, broadcasting services utilizing the internet, 3D television and broadcasting device technologies. The event also had 10 poster exhibits and four interactive exhibits and was attended by 20,194 visitors. We also held 17 exhibitions in Japan and overseas.

We conducted 70 tours of our laboratories for 1,200 visitors. Twenty-six of these tours were for visitors from overseas.

We published 578 articles describing NHK STRL research results in conference proceedings and journals within and outside Japan and issued eight press releases. We continued to consolidate our intellectual property rights by submitting 305 patent applications and obtaining 277 patents. As of the end of FY 2017, NHK held 2,012 patents.

We are also cooperating with outside organizations. Last year, we participated in 21 collaborative research efforts and three commissioned research efforts. We hosted one visiting researcher and 17 trainees. We also dispatched six of our researchers overseas.

The equipment resulting from our research was used in the production of NHK television programs. For example, a system that estimates the head pose of a player in soccer game footage, our "sword tracer" system that tracks the movements of the tip of a sword in fencing and our system that calculates the 3D position of the ball in beach volleyball and displays it with CG were used in sports programs. Other technologies were also used for program production, such as an 8K 2x-speed slow-motion technology used in dramas and a technology for automatically generating summaries of posted videos used in live broadcast programs. In FY 2017, NHK STRL collaborated with the parent organization in making 23 programs. Finally, in recognition of our research achievements, NHK STRL received a total of 32 awards in FY 2017, including the Maejima Award and the Takayanagi Memorial Award.


7.1 Joint activities with other organizations

■Participation in standardization organizations

NHK STRL is participating in standardization activities at international and domestic standardization organizations and projects, mainly related to broadcasting. In particular, we are contributing to the development of technical standards that incorporate our research results.

We have made a number of contributions to the ITU Radiocommunication Sector (ITU-R). As part of Study Group 4 (SG 4) for satellite services, we proposed that the power flux-density specifications be changed to allow increased satellite output on left-hand circularly polarized channels for the larger transmission capacity of 4K/8K satellite broadcasting, and the issue was agreed for inclusion in the agenda of the World Radiocommunication Conference 2019 (WRC-19). At Study Group 5 (SG 5) for terrestrial services, we reflected the specifications of a 42-GHz-band field pick-up unit (FPU) for the wireless transmission of 4K/8K program contributions in a Recommendation and a Report. As part of Study Group 6 (SG 6) for broadcasting services, we submitted a number of contributions on various subjects, including operational guidelines and test signals for the program production of high-dynamic-range television (HDR-TV), high-resolution VR with 8K displays, a transmission method for audio metadata, an integrated platform for broadcasting and telecommunications networks, an integrated broadcast-broadband system utilizing second screens and 8K transmission experiments over terrestrial broadcasting networks, and our proposals were incorporated into Recommendations and Reports.

At the Moving Picture Experts Group (MPEG), a working group of the joint committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), we analyzed coding control for applying High Efficiency Video Coding (HEVC), the video coding scheme of 4K/8K satellite broadcasting, to high-dynamic-range video. Our analysis was reflected in the common evaluation conditions for the standardization of next-generation video coding. Regarding the media transport scheme, we contributed to the establishment of the MPEG Media Transport (MMT) implementation guidelines. We also contributed to the standardization of a new 3D video coding technology at the MPEG-I (Immersive) Visual ad hoc group by proposing a method for converting elemental images of integral 3D images to multi-viewpoint images without losing data.


■ Leadership activities at major standardization organizations

■ International Telecommunication Union (ITU)

Committee name Leadership role

International Telecommunication Union Radiocommunication Sector (ITU-R)

Study Group 6 (SG 6, Broadcasting services) Chair

■Asia-Pacifi c Broadcasting Union (ABU)

Committee name Leadership role

Technical committee Chair

■ Information and Communications Council of the Ministry of Internal Affairs and Communications

Committee name Leadership role

Information and communications technology subcommittee

ITU section Expert member

Spectrum management and planning committee Expert member

Radio-wave propagation committee Expert member

Satellite and scientific services committee Expert member

Broadcast services committee Expert member

Terrestrial wireless communications committee Expert member

■Association of Radio Industries and Businesses (ARIB)

Committee name Leadership role

Technical committee

Broadcasting international standardization working group Chair

Digital broadcast systems development section Committee chair

Multiplexing working group Manager

Download methods TG Leader

Data MMT transmission JTG Leader

Video coding working group Manager

Data coding working group Manager

Copyright protection working group Manager

Digital receivers working group Manager

Ultra-high-definition TV broadcast receivers TG Leader

Digital satellite broadcasting working group Manager

Advanced satellite broadcasting demonstration experiments TG Leader

Mobile multimedia broadcasting systems working group Manager

Digital terrestrial broadcasting channel coding working group Manager

Studio facilities development section

Studio sound working group Manager

Next-generation sound services study WG Leader

Broadcast contribution file format study working group

Data content exchange methods JTG Leader

Sound quality evaluation methods working group Manager

Contribution transmission development section

Terrestrial wireless contribution transmission working group Manager

Millimeter-wave contribution transmission TG Leader

Microwave UHDTV-FPU study TG Leader

Promotion strategy committee

Digital broadcasting promotion sub-committee

Digital broadcasting experts group (DiBEG)

International technical assistance task force Manager

Next-generation broadcast study task force assisting Japan-Brazil joint work section, etc. Manager

■Telecommunication Technology Committee (TTC)

Committee name Leadership role

Multimedia application working group

IPTV-SWG Leader


At the Society of Motion Picture and Television Engineers (SMPTE), we contributed to the preparation of draft standards by proposing a method for multiplexing metadata used for object-based audio over AES3, the standard for digital audio signal transmission.

As part of the Asia-Pacific Broadcasting Union (ABU), we presented our R&D efforts on 4K/8K broadcasting and user-friendly broadcasting at the technical committee meeting held in Chengdu, China. We also contributed as a project manager to discussions on topics such as metadata technology for program production, next-generation terrestrial broadcasting, hybrid broadcasting and OTT technologies. At the general meeting held in the same venue, Mr. Ueda, President of NHK, was elected as vice chairman of the ABU with a tenure of three years starting in January 2018.

In addition to the above activities, we engaged in standardization activities at a number of other organizations, including the European Broadcasting Union (EBU), the Advanced Television Systems Committee (ATSC), which standardizes TV broadcasting systems in the U.S., the Audio Engineering Society (AES), the 3rd Generation Partnership Project (3GPP), which discusses standards for next-generation mobile communications, the Advanced Media Workflow Association Networked Media Incubator (AMWA NMI), which standardizes connection management methods for IP program production systems, the World Wide Web Consortium (W3C), which develops Web standards including HTML5, used for describing content delivered through broadcasting and telecommunications, the Association of Radio Industries and Businesses (ARIB), the Japan Electronics and Information Technology Industries Association (JEITA) and the Telecommunication Technology Committee (TTC) of Japan.

■Collaboration with overseas research facilities

We participated in sub-groups (Renderer, Augmented Reality (AR), Artificial Intelligence (AI), 5G) of the Broadcast Technology Futures (BTF) group under the technical committee of the European Broadcasting Union (EBU) and shared information and exchanged opinions with BTF members.

For research on IP-based program production systems, we continued to participate in activities at the AMWA NMI, which standardizes equipment control systems under the leadership of BBC R&D. We took part in a test on a prototype device equipped with the specifications currently being formulated in order to verify the specifications and the interconnection of devices being developed by member companies, and we confirmed its successful operation.



We participated in the standardization activities of the framework for interoperable media services (FIMS), a mechanism for a future flexible content production infrastructure that the AMWA and EBU are jointly standardizing. The activities led to the release of FIMS Ver. 1.3, which specifies common APIs for automatic metadata extraction. On this basis, we implemented our shot partitioning, face detection and character string detection technologies as FIMS-compliant services and conducted demonstrations at exhibitions such as the International Broadcasting Convention (IBC) 2017.

■Collaborative research and cooperating institutes

In FY 2017, we conducted a total of 21 collaborative research projects and 35 cooperative research projects on topics ranging from system development to materials and basic research.

We collaborated with graduate schools at eight universities (Chiba University, the University of Electro-Communications, Tokyo Institute of Technology, Tokyo Denki University, Tokyo University of Science, Toho University, Tohoku University and Waseda University) on education and research through activities such as sending part-time lecturers and accepting trainees.

■Visiting researchers and trainees and dispatch of STRL staff overseas

We hosted one visiting researcher from the BBC (UK) to honor our commitment to information exchange with other countries and the mutual development of broadcasting technologies. We also took on one post-doctoral research project.

We provided guidance to a total of 17 trainees from eight universities (Waseda University, Tokyo University of Science, the University of Electro-Communications, Tokai University, Nagaoka University of Technology, Tokyo City University, Tokyo Denki University and the Tokyo University of Agriculture and Technology) in their work towards their Bachelor's and Master's degrees.

Six STRL researchers were dispatched to research institutions in the United States, Spain, Belgium and Australia.

■Visiting researchers

Type Term Research topic

Visiting researcher 2017/2/1 to 2017/10/6 High-efficiency coding technology for 8K video

Post-doctoral student From 2017/9/1 High-speed and high-precision phase detection technology for hologram reproduction

■Dispatch of NHK STRL researchers overseas

Location Term Research topic

Universitat Pompeu Fabra, Spain 2016/11/7 to 2017/5/6 Next-generation video coding technology

University of California, USA 2017/3/1 to 2017/8/31 New video experience applying AR and other technologies

University of Massachusetts Boston, USA 2017/9/10 to 2018/3/10 Spatial information modeling based on people's stereognostic features

IMEC International, Belgium From 2017/9/25 High-pixel-density and high-functionality imaging sensor using cutting-edge semiconductor manufacturing technology

The University of Melbourne, Australia From 2017/10/24 Cutting-edge research on natural language processing

MIT Media Lab, USA From 2018/1/21 Cooperative interactive content production by multiple people using 8K displays

■Commissioned research

We are participating in research and development projects with national and other public institutions in order to make our research on broadcast technology more efficient and effective. In FY 2017, we took on three projects from the Ministry of Internal Affairs and Communications:

• R&D on Highly Efficient Frequency Usage for the Next-Generation Program Contribution Transmission
• Research and Development for Advanced Digital Terrestrial Television Broadcasting System
• Research and Development on SFN Relay Technologies for Advanced Digital Terrestrial Television Broadcasting System

■Committee members, research advisers, guest researchers

We held two meetings of the broadcast technology research committee and received input from academic and professional committee members. We held 14 sessions to obtain input from research advisers. We also invited researchers from other organizations to work with us on five research topics.


7.2 Publication of research results

■STRL Open House

The NHK STRL Open House 2017 was held over six days from May 23 under the theme of "Evolving Broadcast Technology 2020 and Beyond." It featured 30 exhibits on our latest research results, such as image representation technologies for conveying the excitement of sports to the world, "Smart Production" technologies that support program production by using AI and big-data analysis, broadcasting services utilizing the internet, 3D television and broadcasting device technologies.

■Broadcast Technology Research Committee Members March 2018

** Committee chair, * Committee vice-chair

Name Affiliation

Kiyoharu Aizawa** Professor, University of Tokyo

Tadashi Ito Senior Vice President, NTT Information Network Laboratory Group

Toshiaki Kawai Executive Director and Chief Engineer, Tokyo Broadcasting System Television Inc.

Yasuhiro Koike Professor, Keio University

Tetsunori Kobayashi Professor, Waseda University

Yasushi Sakanaka Director, Ministry of Internal Affairs and Communications

Yoichi Suzuki Professor, Tohoku University

Junichi Takada Professor, Tokyo Institute of Technology

Atsushi Takahara Professor, Kyushu University

Fumihiko Tomita* Vice President, National Institute of Information and Communications Technology (NICT)

Yasuyuki Nakajima President/CEO, KDDI Research, Inc.

Yasumasa Nakata Executive Vice President, Fuji Television Network, Inc.

Ichiro Matsuda Professor, Tokyo University of Science

Yukinobu Miki Senior Vice-President, National Institute of Advanced Industrial Science and Technology (AIST)

Masayuki Murata Professor, Osaka University

■Research Advisers March 2018

Name Affiliation

Makoto Ando Executive Vice President, Tokyo Institute of Technology

Makoto Itami Professor, Tokyo University of Science

Susumu Itoh Professor, Tokyo University of Science

Tohru Ifukube Emeritus Professor, University of Tokyo

Hideki Imai Emeritus Professor, University of Tokyo

Tatsuo Uchida Emeritus Professor, Tohoku University

Juro Ohga Emeritus Professor, Shibaura Institute of Technology

Tomoaki Ohtsuki Professor, Keio University

Jiro Katto Professor, Waseda University

Yoshimasa Kawata Professor, Shizuoka University

Satoshi Shioiri Professor, Tohoku University

Takao Someya Professor, University of Tokyo

Fumio Takahata Professor, Waseda University

Katsumi Tokumaru Emeritus Professor, University of Tsukuba

Mitsutoshi Hatori Emeritus Professor, University of Tokyo

Takayuki Hamamoto Professor, Tokyo University of Science

Hiroshi Harashima Emeritus Professor, University of Tokyo

Takehiko Bando Emeritus Professor, Niigata University

Takefumi Hiraguri Professor, Nippon Institute of Technology

Masato Miyoshi Professor, Kanazawa University

■Guest Researchers March 2018

Name Affiliation

Mamoru Iwabuchi Associate Professor, University of Tokyo

Tokio Nakada Project Researcher, Tokyo University of Science

Kazuhiko Fukawa Professor, Tokyo Institute of Technology

Toshiaki Fujii Professor, Nagoya University

Tetsuya Watanabe Associate Professor, Niigata University

(Photos: entrance; sports graphics system exhibit)


■Keynote speeches

Title Speaker

Television for 2020 and Beyond Gota Iwanami, President, INFOCITY, Inc.

VR, AR, UHD+... television in 50 years' time – can we predict it today? David Wood, Consultant, EBU Technology and Innovation

■Research presentations

Title Speaker

Face Recognition Technique for TV Program Video Yoshihiko Kawai, Internet Service Systems Research Division

R&D on Live Production System Running on IP Networks Tomofumi Koyama, Advanced Transmission Systems Research Division

Research and Development towards Practical-Use Full-featured 8K Super Hi-Vision Cameras Tomohiro Nakamura, Advanced Television Systems Research Division

Automatic Generation of Audio Descriptions for Sports Programs Tadashi Kumano, Human Interface Research Division

Natural 3D Visualization Technology for Integral 3D Displays Based on the Characteristics of Space Perception and Cognition Yasuhito Sawahata, Three-Dimensional Image Research Division

Development of Green Organic Light-Emitting Diode with High Color Purity Hirohiko Fukagawa, Advanced Functional Devices Research Division

■Symposiums

Expanding Potential for Public Broadcasting with Artificial Intelligence
Moderator: Shoei Sato, Senior Research Engineer, Human Interface Research Division, STRL, NHK
Panelists: Kentaro Torisawa, Director General, Data-driven Intelligent System Research Center, National Institute of Information and Communications Technology (NICT); Jin Umeda, Principal, Life Log Lab., M DATA Co., Ltd.; Nanako Ishido, President, CANVAS / Associate Professor, Keio University Graduate School of Media Design; Akihiko Nakai, Program Production Department, NHK

Redesigning Television Service for Viewers in the Internet Age
Moderator: Toshio Kuramata, Senior Manager, Digital Content Center, NHK
Panelists: Masataka Yoshikawa, General Manager, Institute of Media Environment, Hakuhodo DY Media Partners Inc.; Kiyoyasu Ando, President, HAROiD Inc.; Hisaya Suga, Chief Executive Officer, PRESENTCAST Inc.; Toshio Nakagawa, Head of Internet Service Systems Research Division, STRL, NHK

Delving into What Makes Full-Featured 8K Super Hi-Vision so Special
Moderator: Yukihiro Nishida, Executive Research Engineer, Advanced Television Systems Research Division, STRL, NHK
Panelists: Yutaka Imai, Chief Researcher, Service Development, SKY Perfect JSAT Corporation; Andy Quested, Standards Lead, BBC; Atsushi Ochiai, Senior Manager, Media Planning Bureau, NHK

■Research exhibits

1 AI-Driven Smart Production
2 Enriching Daily Activity by TV Content Connecting with IoT
3 Media-Unifying Platform
4 New Live Sports Graphics System Powered by Realtime Object Tracker
5 Internet Service Technologies for Realizing Interactive Content Viewing
6 Audio Description Services for Sports Programs
7 Sheet-Type Display with High Frame Rate
8 Full-Featured 8K Program Production System
9 8K Laser Projector
10 Speech Recognition for Smart Production
11 Face Recognition Technology for TV Program Video
12 Automatic Generation of Sign Language CG Animation for Sports Programs
13 Automatic Audio Description from Sports-Related Data for Live Broadcasting
14 Services and Technologies to Bridge Content and Daily Activity
15 TV-Watching Robot
16 Privacy Preserving System for Secure Integrated Broadcast-Broadband Services
17 Advancing 8K Shooting and Recording Technology
18 Acoustic Devices for 22.2ch Sound
19 UHDTV Camera for HDR/SDR Simultaneous Production and Improvement of Color Reproduction
20 Program Production System Running on IP Network
21 Smart FPU Broadening Possibilities of Outside Broadcasts
22 8K Super Hi-Vision FPU
23 Next Generation Terrestrial Broadcasting Systems
24 Parallel-Type Integral 3D Display
25 Fundamental Technologies for Integral 3D Television
26 Organic Light-Emitting Diode with High Color Purity
27 Three-Dimensional Integrated Imaging Device
28 Magnetic Nanowire Memory Aiming at a High-Speed Recording Device
29 Metadata Acquisition Technology of Graphics System for Live Sports Programs
30 Ultra-Directional Microphone

The event also had 10 poster exhibits and four interactive exhibits and was attended by 20,194 visitors. In the entrance hall, we exhibited our "sports graphics system," which synthesizes CGs of ball trajectories and speeds on the screen in real time, to offer visitors a glimpse of the future of sports broadcasting. Two keynote speeches on the future of broadcasting and six research presentations were delivered in the auditorium. Three symposiums were also held to discuss AI, the convergence of the internet and broadcasting, and full-featured 8K Super Hi-Vision.

Schedule: May 23 (Tuesday) Opening ceremony; May 24 (Wednesday) Open to invitees; May 25 to 28 (Thursday to Sunday) Open to the public


■Exhibitions in Japan

Throughout the year, NHK broadcasting stations all over Japan hosted events and exhibitions of the latest broadcast technologies resulting from our R&D. They included interactive exhibits so that the general public could experience and better understand our research results. To publicize the start of 4K/8K broadcasting in 2018, we also demonstrated the reception of SHV test satellite broadcasting on various occasions and presented immersive video content.

■35 exhibitions in Japan

Event name (Only major events) Dates Exhibits

Hiroshima Flower Festival (NHK Hiroshima Station) 5/3 to 5/5 CG character operation, Special effects video library, etc.

ITE Annual Conference 8/30 to 9/1 Reception of SHV test satellite broadcasting

CEATEC JAPAN 2017 10/3 to 10/6 Hybridcast Connect X

JAPAN PRIZE 2017 10/15 to 10/18 Sign language CG generation technology, Rika map (Science Map)

NHK Science Stadium 2017 10/21 to 10/22 MMT technology, Integral 3D technology

N Spo! 2017 10/28 to 10/29 Object tracking technology

NHK Osaka Station Open House “BK Wonderland” 11/3 to 11/4 Ultra-high-speed camera

Seto City Digital Festival 2017 11/5 Home companion robot to enjoy watching TV together

Inter BEE 2017 11/15 to 11/17 8K slow-motion system, Hybridcast Connect X, etc.

4K/8K Super Hi-Vision Park 12/1 to 12/3 Sheet-type OLED display

■Overseas exhibitions

The world's largest broadcast equipment exhibition, the National Association of Broadcasters (NAB) Show 2017, was held in April. We exhibited the latest 8K technologies, including an 8K theater, an 8K living room, an 8K codec/FPU and a real-time MTF measuring analyzer, highlighting the steady progress of Super Hi-Vision toward its full dissemination following the start of 8K test satellite broadcasting in August 2016. We also exhibited research outcomes beyond 8K, such as "Augmented TV" and audio description. The show attracted about 103,000 visitors from around the world.

The International Broadcasting Convention 2017 (IBC 2017), the largest broadcast equipment exhibition in Europe, was held in September. We exhibited an 8K living room to give visitors an experience of a "daily life with 8K." We also exhibited elemental technologies for a flexible OLED display, 8K/120-Hz production equipment, an 8K TICO codec, an 8K three-chip camera, a sports graphics system and "Hybridcast Connect X." The convention drew about 58,000 visitors.

■ Interactive exhibits

T1 Pop-up Book Based on Integral 3D
T2 Let's Try Virtual Reality
T3 Three-Dimensional Sound Reproduction with Line Array Loudspeakers
T4 New Experience of Synchronous Multiple Viewing by MMT

■Two overseas exhibitions

Event name Dates Exhibits

NAB Show 2017 (Las Vegas, USA) 4/24 to 4/27 8K theater, 8K living room, 8K codec/FPU, MTF measuring analyzer, Augmented TV, Audio description

IBC 2017 (Amsterdam, Netherlands) 9/15 to 9/19 8K living room, Flexible OLED display, 120-Hz production equipment, TICO codec, 8K three-chip camera, Sports graphics system, Hybridcast Connect X

■Academic conferences, etc.

We presented our research results at many conferences in Japan, such as the ITE and IEICE conferences, and had papers published in international publications such as IEEE Transactions, Optics Express, AIP Advances and the Journal of the Society for Information Display.

■Poster exhibits

P1 Requirements for Intelligible Audio Description
P2 Advanced Technology Behind 8K Shooting
P3 Audio Coding for Next Generation Terrestrial Broadcasting Systems
P4 Movie Contents Preferred to Be Viewed in a Wide Visual Field
P5 Spatial Light Modulator with Narrow Pixel Pitches
P6 Optical Phased Array
P7 Eco-Friendly Quantum Dot Light-Emitting Diodes
P8 High-Performance Thin-Film Transistors for Sheet-Type Displays
P9 Solid-State Image Sensor Overlaid with Photoelectric Conversion Layer
P10 Elemental Technologies for High-Resolution Organic Image Sensors

Academic journals in Japan 56 papers

Overseas journals 18 papers

Academic and research conferences in Japan 232 papers

Overseas/International conferences, etc. 160 papers

Contributions to general periodicals 55 articles

Lectures at other organizations 55 events

Total 576


■Visits, tours, and event news coverage

To promote R&D on 8K Super Hi-Vision, integral 3D television and smart production, we held tours for people working in a variety of fields including broadcasting, movies, music and academic research. We welcomed visitors from around the world, including officials of international broadcasting conference organizations such as IBC, broadcasters from various countries and JICA trainees.

■Press releases

We issued eight press releases on our research results and other topics.

Inspections and tours: 70 (26 from overseas), 1,200 visitors (242 from overseas)
News media: 10 events

Dates Press release content
2017/4/6 Announcement of the STRL Open House 2017
5/23 Development of a green OLED device with high color purity
5/23 Development of a 3D object tracking system for sports graphics
5/23 Development of an integral 3D capture technology using multi-viewpoint robotic cameras
5/23 Development of a VR system using an 8K display
11/8 First exhibition of NHK technologies at Inter BEE 2017
2018/1/24 Announcement of the dates for the 47th Program Technology Exhibition and the 72nd STRL Open House
3/30 NHK Exhibits Latest 8K Content and Production Equipment at NAB Show 2018

■Bulletins

We published bulletins describing our research activities and achievements and special issues on topics such as a video analysis technology and metadata, a wireless transmission technology for Super Hi-Vision program contributions and 3D display devices.

The Broadcast Technology journal, which is directed at overseas readers and features in-depth articles about our latest research and trends, included articles such as "Smart Production Technology," "Internet-based Technology," and "Development of Program Production System for Full-Featured 8K Super Hi-Vision."

■Domestic Publications

STRL Dayori (Japanese, monthly) No.145 to No.156

NHK STRL R&D (Japanese, bimonthly) No.163 to No.168

Annual Report (Japanese, annually) FY2016 Edition

■Publications for overseas readers

Broadcast Technology (English, quarterly) No. 68 to No. 71

Annual Report (English, annually) FY2016 Edition


■Website

The NHK STRL website describes our laboratories and their research and posts reports and announcements on events such as the Open House, as well as the organization's journals. We implemented user-friendly page designs for smartphones and tablets as well as PCs. To make the website easier to use, we also added navigation pages (user indexes) for the general public and for researchers and engineers, providing easier access to the desired information.



7.3 Applications of research results

■Cooperation with program producers

Equipment resulting from our R&D has been used in many programs. Our system that estimates the head pose of a player in soccer game footage, our "sword tracer" system that tracks the movements of the tip of a sword in fencing and our system that calculates the 3D position of the ball in beach volleyball and displays it with CG were used in sports programs. Other technologies were also used for program production, such as an 8K 2x-speed slow-motion technology used in dramas and a technology for automatically generating summaries of posted videos used in live broadcast programs. We collaborated in the production of 23 programs in FY 2017.

■Patents

We participated in the establishment of a new patent pool* for the UHDTV satellite digital broadcasting standards, which was launched in July 2017. We also participated in patent pools for the 2K digital broadcasting and high-efficiency video coding standards. These pools promote the use of patents held by NHK and thereby help to promote broadcasting services. We are protecting the rights to our broadcasting and communications-related R&D as part of our intellectual property management efforts. We are also actively promoting contracts on transfers of patented NHK technologies by enhancing our Technology Catalogue, which summarizes NHK's transferrable technologies, and at events such as the STRL Open House 2017, CEATEC JAPAN 2017, Technical Show Yokohama 2018 and other events held in cooperation with local governments and other organizations.

*A mechanism that bundles licenses of multiple patents required by standards under reasonable conditions.

■Prizes and degrees

In FY 2017, NHK STRL researchers received 32 prizes, including the Maejima Award and the Takayanagi Memorial Award.

Two researchers obtained a doctoral degree in FY 2017, and at the end of FY 2017, 87 STRL members held doctoral degrees.

■Patents and utility model applications submitted

Type New Total at end of FY

Domestic Patents 303 1,148

Utility models 0 0

Designs 0 0

Overseas 2 87

Total 305 1,235

■Patents and utility models in use (NHK Total)

Type New Total at end of FY

Contracts 18 289

Licenses 33 473

Patents 21 243

Expertise 12 230

■Technical cooperation (NHK Total)

Type Total

Technical cooperation projects 12

Commissioned research projects 3

■Patents and utility models granted

Type New Total at end of FY

Domestic Patents 265 1,873

Utility models 0 0

Designs 0 0

Overseas 12 139

Total 277 2,012

Award Winner Award Name Awarded by In recognition of Date

Shuichi Aoki, Yuki Kawamura, Kazuhiro Otsuki

Maejima Award Tsushinbunka Association Development and standardization of broadcasting systems using MMT technology

2017/4/11

Masahide Goto, Kei Hagiwara, Yuki Honda, Masakazu Nanba, Yoshinori Iguchi, Takuya Saraya (Univ. of Tokyo), Masaharu Kobayashi (Univ. of Tokyo), Eiji Higurashi (Univ. of Tokyo), Hiroshi Toshiyoshi (Univ. of Tokyo)

International Conference on Electronics Packaging (ICEP) 2016 Outstanding Technical Paper Award

Japan Institute of Electronics Packaging Presentation at ICEP 2016 “Three-Dimensional Integration Technology of Separate SOI Layers for Photodetectors and Signal Processors of CMOS Image Sensors”

2017/4/19

Yukihiro Nishida, Kenichiro Masaoka, Masaki Emoto, Kohei Ohmura (NHK Sapporo station), Masayuki Sugawara (NEC Corporation)

The Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, Prize for Science and Technology (Development Category)

Ministry of Education, Culture, Sports, Science and Technology

Development of ultra-high-definition television (UHDTV) system

2017/4/19

Kenji Machida, Nobuhiko Funabashi, Hidekazu Kinjo

The Ichimura Prize in Science for Excellent Achievement

The New Technology Development Foundation

Spatial light modulator for electronic holography for three-dimensional television

2017/4/26

Page 57: Annual Report - nhk.or.jp · prototyped an active-matrix-driven spin-SLM using a tunnel magnetoresistance element as a display device and successfully demonstrated the display of

NHK STRL ANNUAL REPORT 2017 | 55

7  Research-related work

Takeshi Kajiyama, Kodai Kikuchi, Kei Ogura, Eiichi Miyashita, Mamoru Tecchikawahara (Tokyo Electron Device), Hiroshi Watase (Tokyo Electron Device), Yosuke Nagai (Tokyo Electron Device), Tokufumi Matsumoto (Tokyo Electron Device)

Image Information Media Future Award, Next-generation TV Technology Award

The Institute of Image Information and Television Engineers (ITE)

Development of a Compression Recorder for Full-featured 8K Super Hi-Vision

2017/4/26

Yoichi Suzuki ITU-AJ Encouragement Award The ITU Association of Japan Contribution to the establishment of a new recommendation and relevant new reports about transmission systems for UHDTV satellite broadcasting at ITU-R SG4 WP4B meetings

2017/5/17

Yoshikazu Narikiyo ITU-AJ Encouragement Award The ITU Association of Japan Proposal for a revision of an ITU-R report on 4K/8K terrestrial broadcasting efforts (BT. 2343) and ITU standardization activities including input of Rio de Janeiro 8K terrestrial transmission experiments results

2017/5/17

Yoshitaka Hakamada ITU-AJ Encouragement Award The ITU Association of Japan International standardization of 4K/8K cable TV transmission system

2017/5/17

Yuichi Kusakabe ITU-AJ Encouragement Award The ITU Association of Japan Standardization of the standard for uncompressed serial digital interfaces (Recommendation BT.2077) and the standard for image parameter values for HDR-TV (Recommendation BT.2100) including the HLG system jointly developed by Japan and the UK

2017/5/17

Shuichi Aoki Standardization Contribution Award Information Processing Society of Japan/Information Technology Standards Commissions of Japan

Contribution to the international standardization of multimedia transmission technologies as the leader of an MPEG SYSTEMS subcommittee

2017/5/23

Toru Kuroda Niwa-Takayanagi Award, Contribution Award The Institute of Image Information and Television Engineers (ITE)

Development and practical application of FM multiplex broadcasting system and terrestrial digital broadcasting system

2017/5/26

Yukihiro Nishida Niwa-Takayanagi Award, Achievement Award The Institute of Image Information and Television Engineers (ITE)

R&D and standardization of wide-color-gamut and high-dynamic-range ultra-high-definition video format

2017/5/26

Tomohiro Nakamura, Takahiro Yamasaki, Ryohei Funatsu, Hiroshi Shimamoto

Image Information Media Future Award, Frontier Award

The Institute of Image Information and Television Engineers (ITE)

Development of a 133-megapixel image sensor and full-resolution single-chip color camera system

2017/5/26

Hideki Mitsumine, Daiichiro Kato (NHK Engineering System, Inc.), Kazutoshi Muto (NHK Engineering System, Inc.)

Technology Promotion Award, Advanced Development Award (R&D Division)

The Institute of Image Information and Television Engineers (ITE)

The Development of the Virtual Studio System Using Hybrid Sensor

2017/5/26

Yuki Koizumi, Shoji Tanaka, Kyoichi Saito, Masaaki Kojima, Yoichi Suzuki

Satellite Communications Research Award Institute of Electronics, Information, and Communication Engineers (IEICE)

“A study on the design of 64APSK constellation” reported at Technical Committee on Satellite Communications in February 2017

2017/6/8

Hiroyuki Hamazumi, Fumito Ito, Takehiko Abe (TV Asahi Corporation), Toshio Tanaka (Tamura Corporation), Masahiro Kawashima (Tokyo Broadcasting System Television Inc.)

Technology Award, Encouragement Award Specified Radiomicrophone User's Federation

Development and practical application of an OFDM digital radio microphone

2017/6/9

Makiko Azuma, Tsubasa Uchida, Taro Miyazaki, Shuichi Umeda, Naoto Kato, Hideki Sumiyoshi, Yuko Yamanouchi, Hiroyuki Kaneko, Nobuyuki Hiruma (NHK Engineering System, Inc.), Seiki Inoue (NHK Media Technology, Inc.)

Hoso Bunka Foundation Awards Hoso Bunka Foundation Development of a sign language CG generation system for weather information

2017/7/4

Masanori Kano Suzuki Memorial Award The Institute of Image Information and Television Engineers (ITE)

Lecture “Calibration method for active multi-view camera that can support various camera arrangements” at 2016 Annual Meeting

2017/8/31

Tomohiro Nakamura Suzuki Memorial Award The Institute of Image Information and Television Engineers (ITE)

Lectures “Development of a Full-resolution 8K Single-chip Camera System” at 2016 Annual Meeting and “8K/120-Hz capture experiments using 133-megapixel (60-Hz) CMOS image sensor” at Winter Meeting

2017/8/31

Kazuto Ogawa Contribution Award The Institute of Electronics, Information and Communication Engineers (IEICE) / Information Processing Society of Japan

Contribution as the general co-chair of international conference IWSEC

2017/9/1

Toshihiro Yamamoto Award for Person of Cultural Merit for Tokyo Citizen

Tokyo Metropolitan Government R&D on high-definition thin TV 2017/10/2

Yutaro Katano, Tetsuhiko Muroi, Nobuhiro Kinoshita, Norihiko Ishii

The Program Chair Award International Symposium on Imaging, Sensing, and Optical Memory 2017

Image Recognition Demodulation Using Convolutional Neural Network for Holographic Data Storage

2017/10/25

Masaaki Kojima, Shoji Tanaka, Hisashi Sujikai, Yoichi Suzuki, Yuki Koizumi

JC-SAT2017 BEST PAPER AWARD The Institute of Electronics, Information, and Communication Engineers (IEICE)

Research presentation “Technology for estimating non-linear characteristics of satellite transponder” at JC-SAT2017

2017/10/27

Shoji Tanaka, Susumu Nakazawa, Kazuyoshi Shogen (Broadcasting Satellite System Corporation), Masashi Kamei (Japan Telecommunications Engineering and Consulting Service)

ABU Technical Review Prize for 2017, Best Article Award

Asia-Pacific Broadcasting Union (ABU) Broadcasting Satellite Services in Terms of Increase Rate of Outage Time Caused by Rain Attenuation

2017/10/30

Makiko Azuma, Tsubasa Uchida, Taro Miyazaki, Shuichi Umeda, Naoto Kato, Hideki Sumiyoshi, Yuko Yamanouchi, Hiroyuki Kaneko, Nobuyuki Hiruma (NHK Engineering System, Inc.), Seiki Inoue (NHK Media Technology, Inc.)

Motion Picture and Television Engineering Society of Japan, Technology Development Award

Motion Picture and Television Engineering Society of Japan, Inc.

Development of a sign language CG animation automatic generation system for weather information

2017/11/1

Ryohei Funatsu, Takayuki Yamashita (Production Equipment Div., Development Center, Engineering Dept.), Kohji Mitani, Yuji Nojiri (NHK Integrated Technology Inc.)

Kanto Region Invention Award, Invention Encouragement Award

Japan Institute of Invention and Innovation

Invention of a focus assist system for high-definition cameras

2017/11/2

Kazunori Miyakawa The Telecommunications Industry Achievement Award

The Telecommunications Association (TTA)

Characteristics improvement and application area expansion of HARP, Characteristics improvement of electrochromic dimming element

2017/11/22

Shuichi Aoki International Standard Development Award Information Processing Society of Japan/Information Technology Standards Commissions of Japan

Contribution to the issuance of ISO/IEC TR 23008-13: 2017 2017/12/12

Daiichiro Kato (NHK Engineering System, Inc.), Hideki Mitsumine

SI2017 Best Lecture Award The Society of Instrument and Control Engineers

Lecture “A study of 3D coordinate measurement and a track indication method of a flying object for broadcasting use” at SICE System Integration Division Annual Conference

2017/12/23

Norihiko Ishii, Taku Hoshizawa (Hitachi, Ltd.)

Kenjiro Takayanagi Memorial Award Kenjiro Takayanagi Foundation Development of a holographic memory prototype drive using wavefront compensation

2018/1/19

Kaisei Kajita SCIS Paper Award IEICE Information Security Technical Committee

Research on improving the reduction efficiency of electronic signature

2018/1/24

Kenichiro Masaoka, Yukihiro Nishida, Masayuki Sugawara (NEC Corporation), Eisuke Nakasu (NHK Engineering System, Inc.), Yuji Nojiri (NHK Integrated Technology Inc.)

33rd Telecommunications Advancement Foundation Award, Telecom System Technology Award, Encouragement Award

The Telecommunications Advancement Foundation

Paper on the sense of reality of 8K video 2018/3/22


NHK Science & Technology Research Laboratories Outline


■ History and near future of broadcasting development and STRL

■ NHK STRL Organization

STRL by the numbers

■ STRL Open House

■ Current research building

1925: Radio broadcasting begins
1930: NHK Technical Research Laboratories established
1953: Television broadcasting begins
1964: Hi-Vision research begins
1966: Satellite broadcasting research begins
1982: Digital broadcasting research begins
1989: BS Analog broadcasting begins
1991: Analog Hi-Vision broadcasting begins
1995: 8K Super Hi-Vision research begins
2000: BS Digital broadcasting begins
2003: Digital terrestrial broadcasting begins
2006: One-Seg service begins
2011: Analog television broadcasting ends
2013: Hybridcast begins
2016: 8K Super Hi-Vision satellite test broadcasting
2018: 8K Super Hi-Vision satellite broadcasting
(Photo: U.S.-made television purchased for the home of the first subscriber)

Completed March 2002

High-rise building: 14 floors above ground, two below ground
Mid-rise building: 6 floors above ground, two below ground

Total floor space: Approx. 46,000 m2

Total research area: Approx. 16,000 m2

Total land area: Approx. 33,000 m2

Patents held : Domestic 1,873

International 139

Degree-holding personnel 87

Director of STRL Toru Kuroda

Deputy Director of STRL Kohji Mitani

Executive Research Engineer Tetsuomi Ikeda

Executive Research Engineer Tomohiro Saito

Shinichi Sakaida

Masakazu Iwaki

Hiroshi Kikuchi

Keiji Ishii

Toru Imai

Kenji Nakashima

Toshio Nakagawa

Shunji Nakahara

Akira Hiramoto

Research planning/management, public relations, international/domestic liaison, etc.
Patent applications and administration, technology transfers, etc.
Integrated broadcast-broadband technology, Hybridcast, IT security, broadband video service technology, video content analysis, etc.
Satellite/terrestrial broadcast transmission technology, 8K contribution technology, multiplexing technology, IP transmission technology, etc.
8K program production equipment, video coding technology, highly realistic sound systems, etc.
Speech recognition/synthesis, machine translation, social media analysis, automatic sign language CG system, automatic audio description generation system, etc.
Spatial 3D video system technology (integral 3D, etc.), 3D display device technology, cognitive science and technology, etc.
Ultrahigh-resolution and ultrasensitive imaging devices, high-capacity fast-write technology, sheet-type display technology, etc.
Personnel, labor coordination, accounting, building management, etc.

Established in June 1930

June 1930 - January 1965: Technical Research Laboratories
January 1965 - July 1984: Technical Research Laboratories, Broadcast Science Research Laboratories
July 1984 - Present: Science & Technology Research Laboratories

Employees 258 (including 230 researchers)

Planning & Coordination Division

Patents Division

Internet Service Systems Research Division

Advanced Transmission Systems Research Division

Advanced Television Systems Research Division

Human Interface Research Division

Three-Dimensional Image Research Division

Advanced Functional Devices Research Division

General Affairs Division

The STRL Open House is held every year in May to introduce our R&D to the public.

(at end of FY 2017)

The NHK Science & Technology Research Laboratories (NHK STRL) is the sole research facility in Japan specializing in broadcasting technology, and as part of the public broadcaster, its role is to lead Japan in developing new broadcasting technology and contributing to a rich broadcasting culture.

(at end of FY 2017)


Access to NHK STRL

(Map: NHK STRL is located near Seijogakuen-mae Station on the Odakyu Line and Yoga Station on the Tokyu Den-en-toshi Line, close to Kinuta Park and Ring Road No. 8/Kanpachi Dori.)

Directions
■Odakyu Line, from Seijogakuen-mae Station, south exit:
[Odakyu Bus/Tokyu Bus] Shibu 24 (渋24) toward Shibuya Station
[Tokyu Bus] To 12 (等12) toward Todoroki-soshajo; Yo 06 (用06) toward Yoga Station (weekdays only); Toritsu 01 (都立01) toward Toritsu Daigaku Station, north exit
■Tokyu Den-en-toshi Line, from Yoga Station:
[Tokyu Bus] To 12 (等12) toward Seijogakuen-mae Station; Yo 06 (用06) toward Seijogakuen-mae Station (weekdays only)
In all cases, get off the bus at the "NHK STRL" (NHK技術研究所) bus stop.

Edited and Published by:
Nippon Hoso Kyokai (NHK) Science & Technology Research Laboratories (STRL)
1-10-11 Kinuta, Setagaya-ku, Tokyo
Tel: +81-3-3465-1111
http://www.nhk.or.jp/strl/index-e.html


NHK Science & Technology Research Laboratories
2017 Annual Report
September 2018