Multimodal Information Architecture: contribution on Artificial Intelligence developments

Authors

G. H. Kuroki Júnior
C. Gottschalg-Duque

Keywords:

Multimodal Information Architecture, Artificial Intelligence, Information Organization Techniques

Abstract

This article presents the contributions of Multimodal Information Architecture to information organization in the training of artificial neural networks, aiming to position information science as an active body of knowledge in artificial intelligence problems. The definitions of Multimodal Information Architecture were applied in their technological phase, under an explanatory and qualitative approach. A five-step procedure is proposed for delineating, analyzing, and transforming the informational space used in neural network training and learning methods, in order to fill gaps identified by authors focused on computer science implementations. The study observed great potential for a structured Multimodal Information Architecture method, which would provide instruments for organizing the data pre-processing that produces the test and learning samples used by artificial neural networks. Such a method could place information science as an actor in and producer of artificial intelligence solutions, rather than a consumer of prefabricated solutions made by computer science.
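
The abstract does not detail the proposed five-step procedure, so no authoritative example can be given here; the following minimal Python sketch is only an illustrative assumption of what organizing an informational space into learning and test samples for a neural network might involve. Every function name, step, and the toy corpus below is hypothetical and should not be read as the authors' method.

# Illustrative sketch only: these steps, names, and the toy corpus are assumed,
# not the five-step procedure proposed in the article (the abstract does not
# enumerate it). The sketch shows one way an informational space could be
# delineated, analyzed, and transformed into learning and test samples.
import random
import re


def delineate(corpus):
    """Delimit the informational space: keep only non-empty documents."""
    return [doc for doc in corpus if doc.strip()]


def analyze(corpus):
    """Build a vocabulary (token -> id) over the delimited corpus."""
    vocab = {}
    for doc in corpus:
        for token in re.findall(r"\w+", doc.lower()):
            vocab.setdefault(token, len(vocab))
    return vocab


def transform(corpus, vocab):
    """Encode each document as a sequence of vocabulary ids."""
    return [[vocab[t] for t in re.findall(r"\w+", doc.lower())]
            for doc in corpus]


def split_samples(encoded, test_ratio=0.2, seed=0):
    """Separate the encoded documents into learning and test samples."""
    rng = random.Random(seed)
    shuffled = encoded[:]
    rng.shuffle(shuffled)
    cut = max(1, int(len(shuffled) * (1 - test_ratio)))
    return shuffled[:cut], shuffled[cut:]


corpus = [
    "Information architecture organizes the informational space.",
    "Neural networks learn from organized training samples.",
    "Pre-processing turns raw documents into model-ready vectors.",
    "",  # empty item dropped during delineation
]
docs = delineate(corpus)
vocab = analyze(docs)
train, test = split_samples(transform(docs, vocab))
print(f"vocabulary: {len(vocab)} tokens; samples: {len(train)} train, {len(test)} test")

The fixed seed in split_samples keeps the learning/test split reproducible, which matters when the resulting samples feed comparative network trainings.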

Published

2023-08-17

How to Cite

Kuroki Júnior, G. H., & Gottschalg-Duque, C. (2023). Multimodal Information Architecture: contribution on Artificial Intelligence developments. Transinformação, 35, 1–18. Retrieved from https://periodicos.puc-campinas.edu.br/transinfo/article/view/6729

Issue

Vol. 35 (2023)

Section

Original