REFERENCES 
Namatame, M., Kitamura, M., and Iwasaki, S. (2020). The science communication tour with a sign language interpreter. Pacific Rim International Conference on Disability and Diversity Conference Proceedings, Center on Disability Studies, University of Hawai'i at Mānoa.
Wakatsuki, D., Kato, N., Shionome, T., Kawano, S., Nishioka, T., and Naito, I. (2017). Development of web-based remote speech-to-text interpretation system captiOnline. Journal of Advanced Computational Intelligence and Intelligent Informatics, 21(2), 310–320. 10.20965/jaciii.2017.p0310
Orhan, C. (2019). A comparative study on indoor soundscape in museum environments. Thesis (M.S.), Bilkent University. http://hdl.handle.net/11693/52316
Shuko, K. (2003). Analysis of the exhibition method in "exhibition on the theme of sound" of museums. Cultural Information Resources, 10(2).
Gaver, W. W. (1993). What in the world do we hear? An ecological approach to auditory event perception. Ecological Psychology, 5(1), 1–29. 10.1207/s15326969eco0501_1
Tabaru, K., Harashima, T., Kobayashi, Y., and Katada, A. (2011). Effects of aided audible frequencies and contextual information on identification of environmental sounds by individuals with hearing impairments: Analysis of individual cases. The Japanese Journal of Special Education, 48(6), 521–538. 10.6033/tokkyou.48.521
Inverso, Y., and Limb, C. J. (2010). Cochlear implant-mediated perception of nonlinguistic sounds. Ear and Hearing, 31(4), 505–514. 10.1097/AUD.0b013e3181d99a52
Kato, Y., Hiraga, R., Wakatsuki, D., and Yasu, K. (2018). A preliminary observation on the effect of visual information in learning environmental sounds for deaf and hard of hearing people. ICCHP 2018, Proceedings (1), 183–186. 10.1007/978-3-319-94277-3_30
Shafiro, V., Sheft, S., Kuvadia, S., and Gygi, B. (2015). Environmental sound training in cochlear implant users. Journal of Speech, Language, and Hearing Research, 58(2), 509–519. 10.1044/2015_JSLHR-H-14-0312
Matthews, T., Fong, J., Ho-Ching, F. W. L., and Mankoff, J. (2006). Evaluating non-speech sound visualizations for the deaf. Behaviour & Information Technology, 25(4), 333–351. 10.1080/01449290600636488
Goodman, S., Kirchner, S., Guttman, R., Jain, D., Froehlich, J., and Findlater, L. (2020). Evaluating smartwatch-based sound feedback for deaf and hard-of-hearing users across contexts. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 1–13. 10.1145/3313831.3376406
Findlater, L., Chinh, B., Jain, D., Froehlich, J., Kushalnagar, R., and Lin, A. C. (2019). Deaf and hard-of-hearing individuals' preferences for wearable and mobile sound awareness technologies. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Paper 46, 1–13. 10.1145/3290605.3300276
Guo, R., Yang, Y., Kuang, J., Bin, X., Jain, D., Goodman, S., Findlater, L., and Froehlich, J. (2020). HoloSound: Combining speech and sound identification for deaf or hard of hearing users on a head-mounted display. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '20), Article 71, 1–4. 10.1145/3373625.3418031