From its beginnings in the 1960s, writing software has evolved into a profession concerned with how best to create software and how to maximize its quality.
How best to create high-quality software is a distinct and controversial problem. It covers software design principles and so-called “best practices” for writing code, as well as broader management issues such as optimal team size, development process, on-time delivery, workplace “culture”, hiring practices, and so forth. All of this falls under the broad rubric of software engineering.
You’ve probably heard of the first widely used programming language — IBM’s Fortran — which was released in 1957 for mathematical and scientific computing. Another early language, Cobol, was designed in 1959 by a committee sponsored by the US Department of Defense for use in business applications.
Over the following decades, the discipline of programming saw a familiar tension between the scientific thinking of academia, which tended to seek idealized solutions to engineering challenges, and the practical needs of an industry faced with real-life time and cost pressures (and bloated code bases). The early 1970s saw the emergence of key ideas in systems thinking that allowed engineers to break these giant projects into modular (and far more manageable) pieces that communicated through well-defined interfaces.
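The idea of modules communicating only through interfaces is easiest to see in modern code. Here is a minimal sketch in Python (the names `Storage`, `InMemoryStorage`, and `record_user` are hypothetical, chosen purely for illustration): the caller depends on the interface alone, so the implementation behind it can be swapped without touching the caller.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """An interface: callers see only these two operations."""

    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One module implementing the interface; its dict is a hidden detail."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

def record_user(store: Storage) -> str:
    # This function is written against the interface, not the implementation,
    # so a database-backed or file-backed Storage could be substituted freely.
    store.save("user", "ada")
    return store.load("user")

print(record_user(InMemoryStorage()))  # prints "ada"
```

The same principle — hiding each module’s internals behind a small, stable interface — is what made the giant projects of that era manageable: teams could work on separate modules in parallel as long as the interfaces held steady.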