Welcome to the computer science reference hub. This page mirrors the neon terminal style of our language pages and delivers
a dense glossary of the core ideas you will meet across theory, systems, networking, AI, security, and software engineering.
Keywords & Definitions
Every foundational buzzword with a concise definition so you can translate any syllabus or job spec.
Algorithm — A precise, step-by-step procedure for solving a problem.
Abstraction — Hiding lower-level details so you can reason about a simpler model.
API (Application Programming Interface) — Contract that defines how software components talk to each other.
Architecture — High-level structure of hardware or software systems and how components interact.
Artificial Intelligence — Systems that perform tasks normally requiring human intelligence.
Amortized Analysis — Worst-case average cost per operation over a sequence of operations.
Information Theory — Quantifies information (entropy, coding, compression).
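The information-theory entry above centers on entropy, which is easy to compute directly. A minimal sketch (the `shannon_entropy` helper is illustrative, not a standard-library function):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy H(X) = -sum p(x) * log2 p(x) over the empirical distribution."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fair coin carries exactly 1 bit of information per flip.
print(shannon_entropy("HTHT"))  # 1.0
```

Entropy is also the theoretical floor for lossless compression: a source with 1 bit of entropy per symbol cannot be coded in fewer than 1 bit per symbol on average.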
Systems & Architecture
Von Neumann Architecture — Single memory for instructions and data.
CPU Pipeline — Fetch, decode, execute, memory, write-back stages.
Instruction Set Architecture (ISA) — Machine-level commands (x86, ARM, RISC-V).
Operating System — Manages processes, memory, files, devices, security.
Process vs Thread — Independent execution contexts vs lightweight scheduled units.
Virtual Memory — Abstraction that gives each process its own address space.
Containers — OS-level virtualization sharing the same kernel.
Edge Computing — Running workloads near data sources to lower latency.
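The process-vs-thread and virtual-memory entries above can be seen in a few lines: threads share one address space, so a thread's write to a global is visible to the code that spawned it. A minimal sketch:

```python
# Threads share their parent's address space, so a thread's write to a
# global is visible after it finishes. A child process, by contrast, gets
# its own virtual address space and would mutate only its own copy of
# `counter`, leaving the parent's value untouched.
import threading

counter = 0

def bump():
    global counter
    counter += 1

t = threading.Thread(target=bump)
t.start()
t.join()
print(counter)  # 1 — the thread wrote directly into our memory
```

This shared memory is also why threads need locks around mutable state, while processes need explicit channels (pipes, sockets) to communicate at all.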
Hardware Internals: CPU, ALU, RAM
When you crack open a modern system-on-chip, you find several tightly coupled subsystems that shuttle data between fast logic units
and comparatively slow memory arrays. Understanding what physically sits on the die or motherboard helps you reason about performance, power, and thermal trade-offs.
Control Unit (CU) — Microcode or hardwired logic that sequences every instruction: it fetches opcodes, decodes them into micro-operations, drives the register file multiplexers, and orchestrates reads/writes to the ALU, FPU, load/store unit, and caches.
Arithmetic Logic Unit (ALU) — Combinational circuits (adder trees, shifters, boolean networks) that execute integer math, comparisons, and bitwise transforms, and set condition flags (ZF, CF, OF, SF) consumed by later instructions.
Floating-Point & Vector Units — IEEE-754 compliant pipelines plus SIMD lanes (SSE, AVX, NEON, SVE) that operate on 128–512-bit registers for dense linear algebra, graphics, and ML kernels.
Register File — Nanosecond-speed SRAM cells holding general-purpose registers, architectural state, instruction pointers, and special-purpose counters; often multi-ported so multiple functional units can read/write simultaneously.
Cache Hierarchy — Private L1 instruction/data caches (32–64 KB, ~1 ns) feed each core, mid-sized L2 caches (~1–2 MB) capture working sets, and a shared inclusive/exclusive L3 (last-level cache) buffers tens of MB before main memory.
Branch Predictor & Speculation — Pattern-history tables, TAGE predictors, and return stacks guess future control flow so the pipeline stays full; mispredictions flush the pipeline and waste cycles, so accuracy directly impacts IPC.
Clock Distribution & Pipeline Depth — Phase-locked loops generate multi-gigahertz clocks, while pipelining splits work into 10–30+ stages; deeper pipelines allow higher clocks but amplify penalties from hazards.
Interconnect & Fabric — Ring buses, meshes, or NoC topologies move cache lines between cores, memory controllers, and IO dies, balancing bandwidth with latency.
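The flags the ALU entry mentions (ZF, CF, OF, SF) fall out of the add itself. A toy sketch of an 8-bit ADD, using x86 flag names for illustration (a real ALU is combinational hardware, not a function):

```python
def alu_add8(a, b):
    """Simplified 8-bit ALU add returning (result, condition flags)."""
    full = a + b
    result = full & 0xFF
    flags = {
        "ZF": result == 0,           # zero flag: result is all zeros
        "CF": full > 0xFF,           # carry flag: unsigned carry out of bit 7
        "SF": bool(result & 0x80),   # sign flag: bit 7 of the result
        # overflow flag: both operands differ in sign from the result,
        # i.e. two same-signed inputs produced an opposite-signed output
        "OF": (a ^ result) & (b ^ result) & 0x80 != 0,
    }
    return result, flags

print(alu_add8(0x7F, 0x01))  # 127 + 1 wraps to -128 signed: OF and SF set
```

Conditional branches like `jz` or `jo` then consume exactly these bits, which is how one adder serves both arithmetic and control flow.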
Random Access Memory (RAM) complements those compute blocks by providing the working dataset:
DRAM Cell — Each bit sits in a tiny capacitor gated by a transistor; because charge leaks, refresh cycles (tREFI/tRFC) constantly rewrite data, which is why DRAM is slower but denser than SRAM.
Rows, Columns, Banks — Accesses activate a row (RAS), then select columns (CAS); bank interleaving lets controllers overlap operations to hide tRCD, tRP, and tCL latencies.
DIMM Modules & Channels — Multiple dual-inline memory modules sit on independent channels to widen bandwidth; dual/quad-channel controllers double or quadruple the data returned per clock.
Integrated Memory Controller (IMC) — Now inside the CPU package, the IMC queues requests, reorders them for efficiency, performs address mapping, and enforces QoS or NUMA policies across sockets.
ECC & Parity — Data centers favor error-correcting code memory that appends parity bits to detect and correct single-bit faults, dramatically boosting reliability for long-running workloads.
Memory Hierarchy Interaction — The OS page cache and MMU translate virtual addresses into physical frames stored in RAM; TLB misses force page table walks, which is why keeping hot pages resident inside RAM — and ultimately inside caches — matters.
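The cache and row-buffer behavior above is why traversal order matters. The sketch below sums a matrix row-major (memory-adjacent) and column-major (strided); treat it as illustrative only, since CPython lists store pointers and interpreter overhead mutes the effect that is dramatic with contiguous arrays in C:

```python
# Illustrative sketch, not a benchmark: access order changes how well the
# cache hierarchy and DRAM row buffers are used.
import time

N = 500
matrix = [[1] * N for _ in range(N)]

def row_major_sum(m):
    # Visits elements in storage order: good spatial locality.
    return sum(x for row in m for x in row)

def col_major_sum(m):
    # Jumps a full row between accesses: poor locality, more misses.
    return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

for fn in (row_major_sum, col_major_sum):
    start = time.perf_counter()
    total = fn(matrix)
    print(f"{fn.__name__}: sum={total} time={time.perf_counter() - start:.4f}s")
```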
Networking & Web
OSI / TCP/IP Models — Layered abstractions for data transmission.
IP Addressing — Uniquely identifies devices on a network.
Routing — Moving packets across networks (BGP, OSPF).
DNS — Maps human-readable names to IP addresses.
HTTP / HTTPS — Application protocols powering the web.
REST / GraphQL / gRPC — Styles for designing web APIs.
WebSockets — Full-duplex communication channel over a single TCP connection.
CDN — Distributed servers that cache content close to end users.
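The DNS step behind every URL in the list above is one system call away. A minimal sketch (the `resolve` helper is illustrative; `getaddrinfo` also consults the local hosts file, which is why `localhost` resolves without network traffic):

```python
import socket

def resolve(hostname):
    """Return the distinct IP addresses the resolver maps `hostname` to."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically '127.0.0.1' and/or '::1'
```

Resolving a public name the same way would walk the real DNS hierarchy via your configured resolver, returning the (possibly CDN-steered) addresses closest to you.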
Software Engineering Workflow
Requirements Gathering — Capture user and business needs.
System Design — Turn requirements into architecture diagrams and interfaces.
Implementation — Build features with clean code and automated tests.
Code Review — Peer feedback for quality and knowledge sharing.
Testing Pyramid — Unit, integration, end-to-end tests to prevent regressions.
Continuous Integration — Automatically build and test every commit.
Continuous Delivery — Keep main branch deployable with automated releases.
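The base of the testing pyramid above is fast, isolated unit tests. A minimal sketch using the stdlib `unittest`, against a hypothetical `slugify` helper (both the helper and the tests are illustrative):

```python
import re
import unittest

def slugify(title):
    """Lowercase a title and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Continuous Integration"), "continuous-integration")

    def test_punctuation_is_collapsed(self):
        self.assertEqual(slugify("REST / GraphQL / gRPC!"), "rest-graphql-grpc")

# Run the suite explicitly so the example works regardless of argv.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
unittest.TextTestRunner().run(suite)
```

A CI server runs exactly this kind of suite on every commit, which is what keeps the main branch deployable for continuous delivery.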