Theoretical computer science

This article is about the branch of computer science and mathematics. For the journal, see Theoretical Computer Science (journal).

Theoretical computer science is a branch of computer science and mathematics that focuses on the more abstract and mathematical aspects of computing, including the theory of computation.

Scope

It is not easy to circumscribe the theory areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) describes its mission as the promotion of theoretical computer science and notes:

The field of theoretical computer science is interpreted broadly so as to include algorithms, data structures, computational complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

To this list, the ACM's journal Transactions on Computation Theory adds coding theory, computational learning theory and theoretical computer science aspects of areas such as databases, information retrieval, economic models and networks. Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people." Some characterize themselves as doing the "(more fundamental) 'science(s)' underlying the field of computing." Other "theory-applied people" suggest that it is impossible to separate theory and application. On this view, so-called "theory people" regularly draw on experimental work done in less theoretical areas, such as software systems research, and there is more cooperation than mutually exclusive competition between theory and application.

Key subfields include: mathematical logic, automata theory, number theory, graph theory, computability theory, computational complexity theory, cryptography, type theory, category theory, computational geometry, combinatorial optimization, and quantum computing theory.

History

Main article: History of computer science

While formal algorithms have existed for millennia (Euclid's algorithm for determining the greatest common divisor of two numbers is still used in computation), it was not until 1936 that Alan Turing, Alonzo Church and Stephen Kleene formalized the definition of an algorithm in terms of computation. Binary and logical systems of mathematics had existed earlier; in 1703 Gottfried Leibniz formalized logic with binary values for true and false. Similarly, while logical inference and mathematical proof had existed in ancient times, it was in 1931 that Kurt Gödel proved with his incompleteness theorem that there were fundamental limitations on what statements could be proved or disproved.
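
As a minimal illustrative sketch, Euclid's algorithm can be written in a few lines of Python (the function name and example values below are illustrative assumptions, not taken from the article):

  def gcd(a: int, b: int) -> int:
      # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
      # until the remainder is zero; the last nonzero value is the GCD.
      while b != 0:
          a, b = b, a % b
      return abs(a)

  print(gcd(252, 198))  # prints 18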

These developments have led to the modern study of logic and computability, and indeed the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As mounting biological data supported this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.
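
As an illustration of the kind of quantity Shannon's theory introduced, the entropy of a discrete source X with symbol probabilities p(x) measures the average information per symbol (a standard formulation, stated here for context):

  H(X) = -\sum_{x} p(x)\,\log_{2} p(x)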

With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, an idea that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render most modern public-key cryptography systems insecure.

Modern theoretical computer science research builds on these foundational developments, but includes many other mathematical and interdisciplinary problems that have been posed.

Organizations

Journals and newsletters

Conferences

See also

Notes

  1. "SIGACT". Retrieved 2009-03-29.
  2. "ToCT". Retrieved 2010-06-09.
  3. "Challenges for Theoretical Computer Science: Theory as the Scientific Foundation of Computing". Retrieved 2009-03-29.
  4. The 2007 Australian Ranking of ICT Conferences: tier A+.
  5. The 2007 Australian Ranking of ICT Conferences: tier A.
  6. FCT 2011. Retrieved 2013-06-03.

Further reading

External links
