top | item 14944404

Software Engineering ≠ Computer Science (2009)

482 points| nreece | 8 years ago |drdobbs.com | reply

305 comments

[+] alkonaut|8 years ago|reply
Software engineers need to know how to recognize and classify problems in CS. You need to know what algorithms and data structures exist, what their properties are, and what they are called. The areas that come up will come from math and computer science (which are closely related). A solid computer scientist knows how to derive Dijkstra's algorithm from first principles. A good software engineer recognizes the problem at hand, and recalls which algorithm to pick when presented with it.

What is that problem in front of you? Gradient descent? Tree traversal? Multiple dispatch? Path finding? What structure represents the data or algorithm? Ring buffer? Blocking queue? Bloom filter?

You rarely need to remember a pathfinding algorithm or trie implementation by heart. What's important is that you recognize the problem at hand as "path finding", "bin packing" or whatever. Terminology is important here. The good software engineer needs to know the proper names for a LOT of things. Recognizing and labeling problems means you can basically look up the solution in no time.

So CS is definitely very relevant for software engineering - but you need a broad understanding instead of a deep one.

There is always the argument that a lot of devs basically do monotonous work with SQL and some web thing in Node and rarely even reach for a structure beyond a list or map. That's true - but sooner or later even they bounce into a performance or reliability issue that's basically always due to an incorrect choice of data structure or algorithm. I'm only half joking when I suggest that most of today's "scaling" is compensating for CS mistakes in software.
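A minimal sketch of that last point in Python (exact timings will vary by machine): the same membership test against a list scans linearly, while a set hashes, and that gap is exactly the kind of problem that gets papered over with hardware.

```python
# Membership testing: a list is O(n) per lookup, a set is O(1) on average.
import timeit

items = list(range(100_000))
as_set = set(items)

t_list = timeit.timeit(lambda: 99_999 in items, number=100)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=100)

# The list scan is typically orders of magnitude slower.
print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
```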

[+] TheAceOfHearts|8 years ago|reply
Sometimes you're just forced to accept that you need to take some shortcuts. There are a few fields for which my general approach is to just try to maintain a mental index of "when might I want to use this".

I'd have a hard time implementing my own crypto, but I've learned enough to know how to use it to secure communications, hide or protect information, ensure no alterations have been made to some arbitrary asset, identify an asset's source, etc.

I love working with a well understood and boring RDBMS. It's predictable and it lets you quickly move on to other problems. But you still need to have a good understanding of how it's implemented in order to store and query your data efficiently. If you have a poor understanding of how indexing works, you'll probably have a hard time selecting the right data model.
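As a sketch of that point (using SQLite and a hypothetical `users` table), you can watch the query planner switch from a full table scan to an index search:

```python
# How an index changes the access path (hypothetical schema, SQLite).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

query = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?"
before = db.execute(query, ("a@example.com",)).fetchall()
print(before)  # a SCAN over the whole table

db.execute("CREATE INDEX idx_users_email ON users (email)")
after = db.execute(query, ("a@example.com",)).fetchall()
print(after)   # a SEARCH using idx_users_email
```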

There's actually lots of fun problems in the frontend world. Try to write a multi-touch gesture responder, it's very tricky to get things right. How about a natural animation system that allows interruptions? CSS animations tend to look unnatural because they're largely time-based, and they don't handle interruptions very well. (Spoiler alert: springs are the magic sauce.)
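To make the spring point concrete, here's a toy integrator (the constants are borrowed from common spring-animation defaults; this is a sketch, not a real animation API). Because the state is position plus velocity, you can change `target` mid-flight and the motion stays continuous - that's why interruptions look natural.

```python
# Semi-implicit Euler step for a damped spring: a = k*(target - x) - c*v.
def spring_step(x, v, target, dt, stiffness=170.0, damping=26.0):
    a = stiffness * (target - x) - damping * v
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(120):  # ~2 seconds at 60 fps
    x, v = spring_step(x, v, target=100.0, dt=1 / 60)
print(round(x, 2))    # settles near the target of 100
```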

Learning about compilers unlocks lots of powerful skills too. You can implement your own syntax highlighting, linter, refactoring tools, autocomplete, etc.
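As a tiny illustrative sketch (the token names are made up), the first stage of all of those tools is a lexer that classifies runs of characters into tokens:

```python
# A minimal lexer: classify character runs, skip whitespace.
import re

TOKEN_RE = re.compile(r"""
    (?P<NUMBER>\d+)
  | (?P<IDENT>[A-Za-z_]\w*)
  | (?P<OP>[+\-*/=])
  | (?P<WS>\s+)
""", re.VERBOSE)

def tokenize(src):
    for m in TOKEN_RE.finditer(src):
        if m.lastgroup != "WS":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 40 + 2")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

A syntax highlighter maps token kinds to colors; a linter walks the token (or syntax) stream looking for suspicious patterns.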

[+] trevyn|8 years ago|reply
Eh, I'd much rather "software engineers" had good product and business sense, since most product managers and CEOs sure don't. No point in building the wrong thing well.
[+] noir_lord|8 years ago|reply
I don't have a formal CS background and have indeed run into the issues you've described.

I generally resort to Google and then go find the best approach and implement it if necessary.

I taught myself the basics of the underlying stuff (and it helps that I'm an older developer who grew up on Turbo Pascal and C since I do have a working knowledge of what the machine is doing underneath).

Those are rare cases though.

[+] StreamBright|8 years ago|reply
>> A good software engineer recognizes the problem at hand, and recalls the algorithm to pick when presented with the problem.

In my experience, people able to pick the right algo straight away are extremely rare.

[+] Illniyar|8 years ago|reply
Everything you suggested software engineers need to know was covered in two courses at my university - one called Data Structures, the other called Algorithms.

And those didn't really have any prerequisites apart from basic math.

There is absolutely no reason for a software engineer to learn abstract algebra, infinitesimal calculus, or any of the other dozen courses that you'll never ever use.

And even then, throughout my 10 years now, I can count on one hand the number of times I actually needed to use these things.

[+] vmware513|8 years ago|reply
Where could we find a comprehensive list of CS problems and algorithms, each associated with the real problem it solves in practice?

You mentioned a few here; I suppose there's already a full list somewhere. :)

[+] davidreiss|8 years ago|reply
> So CS is definitely very relevant for software engineering - but you need a broad understanding instead of a deep one.

Absolutely. A general understanding of CS is necessary to be a competent software engineer. Just like a general understanding of physics is necessary to be a civil/mechanical engineer.

If there is a Venn diagram, there is definitely an overlap of "theory" and "engineering". But theory != engineering.

[+] jolux|8 years ago|reply
I don't understand why the author is so suspicious of formal methods. Other engineering disciplines are based on the application of solid, well-understood principles from the natural sciences to practical problem domains. There are few solid, well-understood principles in computer science that are directly and obviously applicable to software engineering so far.

I vigorously contest the idea that software engineering cannot be rigorous and so shouldn't try.

[+] sillysaurus3|8 years ago|reply
Because there are what, six types of bridges? (EDIT: 36 according to wikipedia.)

There are six thousand types of programs (as a wild guess), and they all interact with each other in an exponential explosion of complexity.

For a formal method to work, it has to be generally applicable across a wide range of situations. There are methods like that in software engineering, and you see them in situations where the program is potentially life-threatening. But most programs would be hindered by this rigor.

[+] peteretep|8 years ago|reply

    > why the author is so suspicious of
    > formal methods
I studied Z-Notation and CSP at what is perhaps the home of formal methods (cs.ox.ac.uk), and I've yet to come across a real-world situation where I've found either in the least bit useful.
[+] ScottBurson|8 years ago|reply
I agree with you about the potential value of formal methods, but I think you completely missed the point of the article. Formal methods can't supply specifications. They can perhaps tell you if your specification is consistent and therefore implementable, but they can't tell you whether the system you've specified will meet your needs.

I think those of us who promote formal methods need to remember this. At best only verification -- making sure the implementation matches the specification -- will ever be fully automated. Validation -- making sure that what we specified is actually what we wanted -- will always be a human activity.

[+] auggierose|8 years ago|reply
It is right to be suspicious of formal proof. In most areas of software engineering, employing formal proof makes you about 10-1000 times slower. Knowing how to do a formal proof in principle though lets you often reap a lot of the benefit without actually getting slowed down much. This is similar to how mathematicians know how to prove something in principle without running it through Coq or Isabelle.

By the way, Curry Howard is just one way of doing formal proof (one I personally don't like). There are many foundational and practical problems that need to be solved before formal proof is ready to go mainstream (but I am convinced that it will one day).

[+] bluetwo|8 years ago|reply
Because entropy. The natural state of the world is chaos.
[+] peterburkimsher|8 years ago|reply
Here's the graphic transcribed as text for non-English speakers.

Software Engineering: Requirements, Modifiability, Design Patterns, Usability, Safety, Scalability, Portability, Team Process, Maintainability, Estimation, Testability, Architecture Styles.

Computer Science: Computability, Formal Specification, Correctness Proofs, Network Analysis, OS Paging/Scheduling, Queueing Theory, Language Syntax/Semantics, Automatic Programming, Complexity, Algorithms, Cryptography, Compilers.

In my opinion, some of those could be on the other side of the line (estimation could be CS, language syntax/semantics and network analysis could be SE). But I agree with the general division.

I studied Electronic Systems Engineering, but somehow always found jobs in software companies. One problem I struggle with is the division between DRY (Don't Repeat Yourself) and WET (Write Everything Twice) coding styles.

Most programmers hate it when code is repeated. They prefer to spend days trying to integrate external libraries instead of just copying the necessary functions into the main branch. There are good reasons for this (benefiting from new features when the library gets updated), but there are also risks (the code breaking when the library gets updated).

Software Engineering priorities include Safety, Portability, Modifiability, and Testability. I interpret that as a WET programming style. "If you want it done well, do it yourself." There's no arguing about responsibility then - the code is mine, and I should fix it if it breaks.

[+] fny|8 years ago|reply
I don't think you understand DRY. It's a concern within the code you write rather than without. Whether you choose to freeze your dependencies is an entirely different concern.

Say, for example, you have a complicated condition you test for frequently within your code. DRY is when you decide to extract that condition into a testable function you can rely on everywhere in your code (e.g. `isLastThursdayOfMonth(date)`). You can extend this same DRY thinking to all the other abstraction tools (e.g. types/classes) you have as an engineer too. I'm sure you'd agree that it would be an enormous liability and maintainability nightmare to rewrite the logic for that function everywhere. God forbid you're ever asked to change your littered logic to the equivalent of `isLastWeekendOfMonth(date)`.
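For illustration, the hypothetical helper from above might look like this (assuming the Gregorian calendar and Python's Monday-is-0 weekday convention):

```python
# Extracted once, testable everywhere, instead of scattered through the codebase.
import calendar
from datetime import date

def is_last_thursday_of_month(d: date) -> bool:
    days_in_month = calendar.monthrange(d.year, d.month)[1]
    # Thursday is weekday() == 3; the last one falls in the final 7 days.
    return d.weekday() == 3 and d.day > days_in_month - 7

print(is_last_thursday_of_month(date(2017, 8, 31)))  # True: a Thursday, last of the month
print(is_last_thursday_of_month(date(2017, 8, 24)))  # False: a Thursday, but not the last
```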

[+] dragonwriter|8 years ago|reply
> Software Engineering priorities include Safety, Portability, Modifiability, and Testability. I interpret that as a WET programming style. "If you want it done well, do it yourself."

None of those demand “write everything yourself”, only setting the same criteria for external code you integrate as you would have for code you write yourself.

[+] deathanatos|8 years ago|reply
> but there are also risks (the code breaking when the library gets updated).

This is the entire point of Semantic Versioning: to communicate breaking changes through the version number, and to build tooling to programmatically avoid breaking dependent code.

(No, it isn't generally perfect: it does require that a human realize what the API is and that a given change breaks it. If we had some programmatic language for specifying the API… type systems start this, but tend not to capture everything¹)

¹I suspect there are some formal analysis folks who know more than I do here, screaming that there is a better way. I work in Python day-to-day, so generally, it's all on the human.
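A sketch of the tooling half of that point (simplified; real resolvers also handle pre-release tags and build metadata): under SemVer, an upgrade is presumed safe iff the major version is unchanged, with the common convention that 0.x minors also signal breakage.

```python
# Minimal "MAJOR.MINOR.PATCH" compatibility check in the spirit of SemVer.
def parse(v):
    return tuple(int(p) for p in v.split("."))

def compatible(current, candidate):
    cur, cand = parse(current), parse(candidate)
    if cur[0] == 0:  # 0.x: minor bumps are treated as breaking by convention
        return cand[:2] == cur[:2] and cand >= cur
    return cand[0] == cur[0] and cand >= cur

print(compatible("1.4.2", "1.9.0"))  # True: minor bump, presumed non-breaking
print(compatible("1.4.2", "2.0.0"))  # False: major bump signals breakage
```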

[+] knocte|8 years ago|reply
> Computer Science: Computability, Formal Specification, Correctness Proofs, Network Analysis, OS Paging/Scheduling, Queueing Theory, Language Syntax/Semantics, Automatic Programming, Complexity, Algorithms, Cryptography, Compilers.

It's interesting/funny that when talking about CS, or more academic points of view on software development, the terms "Formal Specification" and "Correctness" are often mentioned, yet most CS students/labs still use languages that are really badly suited for this job, such as dynamically typed languages like Python.

[+] lgas|8 years ago|reply
How does copying the english words from the image to english words as text help non-English speakers?
[+] ChuckMcM|8 years ago|reply
I've seen similar articles to this one, both in print and on web sites. I used to explain it to people as the difference between 'coders' and 'engineers' but I think my own hubris at having a degree got in the way of my thinking on it.

Over the decades I've met a bunch of people who program computers for a living, and there is clearly a spectrum: at one end is the person who spends the weekend benchmarking different sort algorithms under different conditions for the fun of it, at the other the guy who left the office at 5PM once an integration test passed on a piece of code he pasted in from Stack Overflow, deeming it to have no regressions. Many disciplines have such a wide range: chefs who spend their weekends trying different flavors versus cooks who take frozen patties out, reheat them and serve; painters who throw human emotion into a painting versus painters who lay down a yellow line on a road according to a template for $20/hr.

It seems to me that most, if not all, of the 'theory' stuff in computer science is just math of one form or another, not unlike how all the 'theory' stuff in electrical engineering is just physics. You can do the tasks without the theory, but you rarely invent new ways of doing things without that understanding.

But just like with carpenters and architects, there is a tremendous amount of depth in the crafting of things. That brilliance should be respected, college trained or not, so trying to 'split' the pool doesn't lead to any good insights about what being a good engineer is all about.

[+] zerr|8 years ago|reply
You seem to be labeling overworking as "good" and on-time as "bad"... Here's an alternative perspective: I've seen underperformers who try to make up for it by staying late and even giving their weekends to their employers; on the other hand, I've seen brilliant engineers who do the job and leave the office at 5PM, spending personal time on recreation, including learning other tech/stuff not relevant to their employers...
[+] Clubber|8 years ago|reply
>I used to explain it to people as the difference between 'coders' and 'engineers'

This isn't going to be popular, but it's true.

Coders are what people called themselves before business started making the decisions about how to write software.

Engineers are what people called themselves after business started making the decisions about how to write software.

Guess what? People who write software aren't engineers, they are programmers. You have crap programmers and you have exceptional programmers, but they are paid to write programs. "Software Engineer" is about as valid as "Sanitation Engineer."

Coder is slang for programmer, because they write "source code." It dates back to at least the 80s, probably earlier. It is also an acceptable term for programmer.

If you have to make up a fancy term, call yourselves software developer or software designer. If you want to be called an engineer, go to engineering school.

https://www.theatlantic.com/technology/archive/2015/11/progr...

[+] halfnibble|8 years ago|reply
I didn't study Computer Science in college. Not one single course. But I'm not stupid. I made straight A's in math through Calculus III. So a lot of these comments frustrate me. I've taught myself literally everything I know. I've read dozens of books. I practice coding obsessively--it's my passion. Do I "get shit done"? Yes, absolutely. Do I not care about the efficiency of my algorithms? No, I care deeply. I don't always know the "computer-sciency" term for things. But my goodness, get off your high horse and tell me what you want accomplished. Chances are I'll implement a solution that's just as efficient and arguably much better than most "engineers" can. And no, I'm not going to be obsolete at age 40. By the time I reach age 40, PhDs will be coming to me for advice. Because I didn't study computer science in college. I'm studying it for life.
[+] ijidak|8 years ago|reply
I think your type is the exception rather than the rule. I have a background in computer science and I love the beauty of correctness. But at work, I constantly find myself struggling to get my fellow developers (even those with CS degrees) to think in terms of mathematical correctness instead of just "how can I get this program to work today?"

For example: can we define our methods to use the most general base type that is appropriate, or an interface that is stricter still, plus guard clauses - the combination of which creates a method that mathematically cannot fail except for upstream, downstream, or machine-level (e.g. out of memory) issues? If we do so consistently, then we have rock-solid methods that we can trust.

But usually I find that my co-workers don't want to think in terms of these sorts of rock-solid contracts. Instead they just want to get the program to work today, to get their scrum story done. Inevitably they keep going back to these methods to fix a scenario they didn't anticipate. It's such a colossal, collective waste of time.

I don't think everyone has to study computer science, but an appreciation that programming can be more mathematically precise than many programmers make it would benefit the industry. It would drastically reduce bug counts, improve the ability to reason about code, and increase developer productivity.
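A sketch of the kind of contract being described (the function is hypothetical): accept the most general type that works, then guard the inputs up front so the body cannot fail on bad data.

```python
# Guard clauses narrow the input space; past them, the body cannot fail.
from typing import Iterable

def average(values: Iterable[float]) -> float:
    vals = list(values)
    if not vals:
        raise ValueError("average() requires at least one value")
    if not all(isinstance(v, (int, float)) for v in vals):
        raise TypeError("average() requires numeric values")
    return sum(vals) / len(vals)

print(average([1, 2, 3]))  # 2.0
print(average(range(5)))   # accepts any iterable, not just a list
```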
[+] foepys|8 years ago|reply
Why do you consider asking for advice a bad thing, or a sign of being inferior? Every PhD knows that if you don't know something, you ask somebody who does. Collaborating is a huge part of academia. If you think you are better than a PhD just because they ask you stuff, you are definitely wrong.

You seem to have a completely wrong understanding of why people do a PhD. They (okay, most) don't do it to attach a title to their name but because they are passionate about a specific field and want to expand their and others' knowledge of it.

[+] deelowe|8 years ago|reply
That works great for you. What happens when my objective is to develop a solution that requires 100s of developers, spans multiple years, costs billions, and has major liability concerns?
[+] ajarmst|8 years ago|reply
I'm convinced that the only useful definition of a Software Engineer is "someone who has 'Software Engineer' in their job title". Most other Engineering disciplines are far more rigorously defined. That said, observing a disconnect between theory and application is hardly novel or unique to software disciplines.
[+] amw-zero|8 years ago|reply
> all computer hardware is essentially equivalent.

This is quite inaccurate. Hardware directly influences software. "if" statements, functions, and threads didn't exist at one time, and all require explicit hardware support. I believe that as we come up with different abstract constructs at the hardware level, we'll influence the possible software that can be written.

[+] DonaldFisk|8 years ago|reply
Sometimes true (e.g. the PDP-11 instruction set did influence C), but software can also influence hardware.

Many early computers had very rudimentary subroutine call mechanisms (e.g. the B-line of the Elliott 803), but this didn't prevent programmers from using functions which returned values, sometimes recursively.

Burroughs mainframes were designed to run Algol 60 (with a few additional instructions for use by COBOL programs), and Lisp Machines were designed to run Lisp. In these cases, the influence of the languages extended to the entire instruction set. This is a better approach, as it's easier to experiment with language design than it is with hardware design.

[+] ajarmst|8 years ago|reply
There are many abstract concepts that need to be realized before we can usefully call a particular device a 'computer'. Many, but certainly not all, of these, are gathered up into things like Turing Completeness, Von Neumann Architecture, etc. On that (admittedly mostly theoretical) scale, it is meaningful to discuss computers as a broad class having certain characteristics. That's what allows us to reason effectively about things like efficiency and correctness in computer algorithms. They even allow us to meaningfully compare digital, analog, mechanical and quantum computers, despite radical differences in the physical hardware. If the object you're showing me doesn't support conditional behaviour ('if' statements), it's going to be pretty hard to convince me to discuss it as though it were a computer.
[+] goatlover|8 years ago|reply
As an example of different hardware, there was a Russian engineer who built a couple of ternary computers in the 70s. And of course there have been analog computers.

Quantum computers would certainly not be considered "essentially equivalent".

[+] hestefisk|8 years ago|reply
Software engineering is where the rubber hits the road in terms of requirements definition, creating a solid design, fitting stuff into an existing legacy environment (SAP anyone? Java EE?), iterating prototypes with stakeholders... and usually in large corporations. It was out of many years of budget overruns in defense procurement that software engineering cornerstones such as CMMI emerged.

To me, the essence of software engineering is that 20% is about building the 'good' solution itself, e.g. architecture, code, release / deployment, ... the remainder of the engineering is navigating / tolerating the inherent corporate messiness of politics, opinions, power, and everything else... engineering the solution is the easy part; engineering good requirements and quality is tough.

[+] partycoder|8 years ago|reply
Let's revisit the definition of "engineering", in a simplified form:

    Science -> Engineering -> Technology
Engineering borrows scientific[1] knowledge to create[2] technology[3]

[1]: or empirical knowledge

[2]: or maintain or implement

[3]: or processes

The relationship between science and engineering has been clear for a while now, even before the appearance of software engineering.

There's a lot of science at work in existing software, so it would be inaccurate to say that software is "unscientific". However not many people get to work on those projects.

A vast majority of people can make a decent living working on user facing technologies built with existing technology. At that level appealing to non-technical stakeholders has much more weight than applying engineering rigor.

But that's not the reality for everyone.

[+] tim333|8 years ago|reply
The author has a slightly odd use of the word engineering. If you look at its use in a conventional field like making cars: the science bit is the basic physics and chemistry of how gases expand etc.; the engineering is designing the machinery so the brakes work and the engine produces enough power without breaking; and the human issues, like whether the workers go on strike or the end users are idiots and crash, are not engineering.

Similarly I'd say in software the engineering bit is making reliable systems that are fault tolerant and secure and so on and then the people bits like the user interface are something like design and psychology, not engineering.

[+] drawkbox|8 years ago|reply
Most developer jobs contain parts of both, with more time spent in software engineering.

Software development, app development, game development, web development are all probably 90+% software engineering and 1-10% computer science depending on the project. Specific projects may differ such as writing standard libraries, engines, data, standards, teaching, etc. In the end most of it is production and maintenance as part of shipping.

[+] Chiba-City|8 years ago|reply
These are complicated terms. Harvard's CS was part of their Applied Math department. There are Applied Maths of scheduling programmatic Engineering outcomes for sure. Fred Brooks taught us all that.

I studied Russell, Godel, Tarski and Quine and then compiler and runtime logic (as a Philosophy major). Back then CS was mostly a realm of 3-Page proofs on alpha renaming or newfangled Skip List speed/space utility.

As an old VAX/Sun or 512K/DOS C programmer working in DC for decades around lots of TC, datacenter and transaction processing folks, an SE MUST have basic speed/space, set theoretic, programming by contract, data integrity and MTBF abstractions in their heads while they plan and develop. Both accuracy and performance against test and measure just matter for the business cases 24/7.

Content software developers patching together framework components on 2-day schedules for consumer Web bloatware rarely understand something like the data-integrity needs of billing-system logic embedded in redundant switches failing over on rough schedules. Typing commands is not even Software Engineering.

Software Engineering is not an individual identity phenomenon. SE is how groups show responsibility for stakeholder outcomes unrelated to paychecks. First rule of SE is everyone on the team passes the bus test. Nobody is essential. Unless we seek luck, we can't improve what we don't measure. Learning how and what to measure takes real training and group method application. So many out there never know what they are missing.

Business competition minus lucky windfalls is largely based on COST ACCOUNTING. Successful operations will discover heat dissipation cost challenges. Basic CS speed/space, contract covenant assertions, data integrity and MTBF logic in Software Engineers translates very easily into understanding business innovation problems.

[+] autokad|8 years ago|reply
Rarely has asymptotic complexity mattered to my code. Usually the most important factor is modularization and readability. I spend more of my time reading or re-using code, and my time is more expensive than a computer's. Plus, highly optimized code can sometimes be unreadable and lead to bugs, which are also costly.
[+] saimiam|8 years ago|reply
> asymptotic complexity

If it hasn't mattered to you, it's probably because you are using libraries or apis which have solved for optimal performance.

In short, performance mattered a lot to your code. Only, you didn't slog long hours to make it so.

Back to the topic at hand, if you didn't spend time to understand why a particular module or library is part of your code base - be it for performance or maintainability or any other -ities - you're halfassing your job as a software engineer. Would a structural engineer ever claim with a straight face that they have never worried about the integrity of their struts? That's basically what you said with your claim.

[+] maxxxxx|8 years ago|reply
I have seen the impact of linear or exponential complexity quite a bit in real world code so I think it's good to be aware of it in case you are having performance problems.
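A minimal illustration of that gap, where the complexity class rather than the constant factor decides feasibility:

```python
# Naive recursion is exponential in n; adding a cache makes it linear.
from functools import lru_cache

def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(90))  # returns immediately, ~90 distinct calls
# fib_naive(90) would need on the order of 2^90 calls - effectively never finishes
```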
[+] k__|8 years ago|reply
In Germany we have Informatik, which was treated as CS & SE for a long time.

Lately more and more SE degrees are sprouting up.

On the other hand we also have universities of applied sciences, where Informatik is often more like SE.

[+] flavio81|8 years ago|reply
In Peru we also have "Informatics Engineering", which is my degree. It is a mixture of CS courses (and a lot of math courses as well), plus SE and also EE courses (i.e. digital electronics' fundamentals / computer ALU/CPU design).
[+] joedo3|8 years ago|reply
Scientists use the scientific method to make testable explanations and predictions about the world. A scientist asks a question and develops an experiment, or set of experiments, to answer that question. Engineers use the engineering design process to create solutions to problems.
[+] KirinDave|8 years ago|reply
I neither agree nor disagree with the article. I think it conflates a lot of stuff.

But look, what the math and science sides of the room throw at us definitely informs the engineering. In every other engineering discipline, from architecture to ditch digging, there is a feeder system from a variety of mathematical and scientific fields. While many other engineering disciplines are well established, they are not immune to this and in general don't begrudge it.

Doctors are required to keep up on the state of treatment. Architects need to keep up on materials science AND new mathematical modeling techniques and tools. Car designers care about new discoveries in lighting, battery and materials technology.

Here's a good example of the kind of stuff we all should be on the hook for. I've tried to push this paper up to the front page a few times now because it's roughly the same as if someone walked up and calmly announced they'd worked out how to compress space to beat the speed of light:

http://www.diku.dk/hjemmesider/ansatte/henglein/papers/hengl...

Folks are generalizing linear-time sorting to things we previously thought were only amenable to pairwise comparison sorts, without needing a custom programming model and tons of thought. And then a famous engineer-and-also-mathematician made an amazingly fast library to go with it (https://hackage.haskell.org/package/discrimination).

We're seeing multiple revolutions in our industry made of... well... OLD components! While deep learning is starting to break untrodden ground now, a lot of the techniques are about having big hardware budgets, lots of great training data, and a bunch of old techniques. The deep learning on mobile tricks? Why, that's an old numerical technique for making linear algebra cheaper by reversing the order we walk the chain rule. O(n) general sort is arguably bigger if we can get it into everyone's hands, because of how it changes the game for bulk data processing and search (suddenly EVERY step is a reduce step!)
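For a feel of how a sort can dodge pairwise comparisons entirely, here is the classic LSD radix sort that the discrimination work generalizes (a sketch: non-negative integers only, one byte per pass):

```python
# O(n) for fixed-width integer keys: bucket by each byte, least significant first.
def radix_sort(nums, base=256):
    nums = list(nums)
    if not nums:
        return nums
    shift = 0
    max_val = max(nums)
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n >> shift) % base].append(n)  # stable bucketing per byte
        nums = [n for b in buckets for n in b]
        shift += 8
    return nums

print(radix_sort([170, 45, 75, 90, 2, 802, 24, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

The discrimination paper's contribution is extending this beyond machine integers to arbitrary (compositionally defined) key types.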

We've similarly been sitting on functional programming techniques that absolutely blow anything the OO world has out of the water, but which require an up-front investment of time and practice with a completely alternate style of programming. But unlike our fast-and-loose metaprogramming, reflection and monkey patching tricks in industry, these techniques come with theorems and programmatic analysis techniques that make code faster for free, not slower.

Even if your day job is, like mine, full of a lot of humdrum plug-this-into-that work, we can benefit from modern techniques to build absolutely rock solid systems with good performance and high reliability. We could be directly incorporating simple concepts like CRDTs to make our systems less prone to error.
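A CRDT can be surprisingly small. Here is a toy grow-only counter (a G-Counter): each replica increments only its own slot, and merge is an element-wise max, so replicas can sync in any order and still converge.

```python
# G-Counter CRDT: per-replica counts, merge = element-wise max.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5 - both replicas converge
```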

It's our job (and arguably it's the hardest job of the field) to dive into the world of pure research, understand it, and bring what's necessary out to the world of modern software. That means more than just tapping away at CSS files, or wailing about NPM security, or shrugging and saying, "Maybe Golang's model is the best we can hope for from modern programmers."

[+] z3t4|8 years ago|reply
What is software engineering!? Making an Excel sheet? Making a web site? Writing SQL? Using programming language X, Y, Z?
[+] cpburns2009|8 years ago|reply
Software Programming != Computer Science
Software Programming and Science != Engineering

While we're drawing distinctions stop calling yourself an engineer unless you're legally licenced as one. Programming may share similarities with engineering but it lacks the professional accreditation and liability.

[+] sytelus|8 years ago|reply
Computer science is neither about computers nor is a science :).
[+] sytelus|8 years ago|reply
People are obviously not getting this popular joke in academia. Open a book such as Introduction to Algorithms by CLRS and you will see it's all about creating algorithms in pseudo-code, proving their correctness, evaluating runtimes, etc. The authors pass not only on "systems engineering" but even on producing algorithms in an actual language that can be compiled on an actual computer. Don't get me wrong, I love the book, have gone through every single page and the vast majority of exercises twice, and truly enjoyed it. That's when it hits you what computer science is really about.

The folks in fields like mathematics or physics didn't use to consider "Computer Science" a "real science". As a fun fact, there were no journals on computer science for quite a long time. Researchers like Dijkstra would identify themselves as "Mathematician" and publish their now very well known algorithms in the mathematical literature :).

[+] goatlover|8 years ago|reply
It's about the math of computation then, which can be carried out by any arbitrary system that we humans deem as carrying out symbol manipulation.