Automating Linguistics offers an in-depth study of the history of the
mathematisation and automation of the sciences of language. In the wake
of the first mathematisation of the 1930s, two waves followed: machine
translation in the 1950s and the development of computational
linguistics and natural language processing in the 1960s. These waves proved pivotal in light of the later work on large computerised corpora in the 1990s and the unprecedented technological development of computers and software.

Early machine translation was devised as a war technology originating in the sciences of war, amid an amalgam of mathematics, physics, logic, neurosciences, acoustics, and emerging
sciences such as cybernetics and information theory. Machine translation
was intended to provide mass translations for strategic purposes during
the Cold War. Linguistics, in turn, did not belong to the sciences of
war, and played a minor role in the pioneering projects of machine
translation.

Comparing the two trends, the present book reveals how the
sciences of language gradually integrated the technologies of computing
and software, resulting in the second-wave mathematisation of the study
of language, which may be called mathematisation-automation. The
integration took on various shapes contingent upon cultural and
linguistic traditions (the USA, the former USSR, Great Britain, and France). By
contrast, working with large corpora in the 1990s, though enabled by
unprecedented development of computing and software, was primarily a
continuation of traditional approaches in the sciences of language, such as the study of spoken and written texts, lexicography,
and statistical studies of vocabulary.