Use Python when possible.
Performance limitations can often be handled through good algorithm selection and careful use of the standard library, or of other libraries that use C under the hood.
If our performance goals cannot be reached with Python, look at Mojo, which may eventually become the default choice.
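As a small illustration of how far algorithm selection alone can go, compare membership testing against a list (a linear scan) with a set (an average O(1) hash lookup). This is a stdlib-only sketch; the collection size and probe value are arbitrary:

```python
# Good algorithm/data-structure selection often beats micro-optimization:
# membership tests against a set are O(1) on average, vs O(n) for a list.
import timeit

items = list(range(100_000))
as_set = set(items)
missing = -1  # worst case for the list: forces a full scan

t_list = timeit.timeit(lambda: missing in items, number=100)
t_set = timeit.timeit(lambda: missing in as_set, number=100)
assert t_set < t_list  # the set wins by orders of magnitude
```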
- uv to install dependencies and manage Python versions, tools, and single-file scripts.
- ruff for Python linting and auto-formatting.
- ty for Python typechecking once it's production ready; pyright until then.
- Django for full-featured backends.
- FastAPI for standalone/simple APIs.
- Pydantic for data validation.
- HTTPX for making HTTP requests.
- Pytest for testing.
- htmx for reactive web UIs without JavaScript single-page apps.
- hypothesis for property-based testing/fuzzing.
- Boto3 for interacting with AWS.
- Polars for data analysis/manipulation.
- Numpy for scientific computing.
- tqdm for displaying progress bars.
- matplotlib for plotting.
- Dash for data visualization via web apps.
- Datasette for data exploration/publishing.
- Scrapy for web crawling/scraping.
- BeautifulSoup for HTML parsing.
- Pillow for image manipulation.
- Pygame for 2D games.
- Rich for rich text formatting in terminals.
- Textual for TUIs (text-based user interfaces).
- OR-Tools for combinatorial optimization algorithms.
- scikit-learn for machine learning.
- PyTorch for deep learning.
- PostgreSQL as the default option.
- SQLite for simpler projects.
- Redis as a cache layer if needed.
- JSON as interchange format.
- Neovim as primary editor.
- Cursor for more integrated AI editing.
- PyCharm for graphical debugging.
- git for source control.
- Unix tools combined with pipes (|) to process text generally (awk, sed, wc, grep, tr, diff, uniq, sort, cut, cat, head, tail).
- ripgrep as an improvement to grep.
- fd as an improvement to find.
- jq to parse and process JSON.
- parallel for parallel execution on a single machine.
- Ghostty as terminal emulator.
- zsh as shell.
- Excalidraw for diagram creation.
- aider for AI pair programming in the terminal.
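In the same spirit as the Unix pipeline bullet above, the standard library can express a `tr | sort | uniq -c | sort -rn`-style word count in a few lines (the sample text is invented):

```python
# A stdlib take on: tr ' ' '\n' | sort | uniq -c | sort -rn
from collections import Counter

text = "to be or not to be"
counts = Counter(text.split())
top = counts.most_common(2)  # ties keep insertion order: "to" before "be"
assert top == [("to", 2), ("be", 2)]
```

Keeping such transformations in Python rather than a one-off shell pipeline makes them testable with pytest later.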
- Chunk related functionality together, since our working memories can handle roughly seven items.
- Solve a related, but simpler problem.
- State the problem clearly, and then develop incrementally.
- Repeat tasks manually until patterns emerge and functions discover themselves.
- Build classes independently and let inheritance discover itself.
- Treat programming with data structures as a graph-traversal problem: travel from one "island" to another (strings -> lists -> dictionaries, etc.) to reach the appropriate functionality.
- Separate ETL (extract-transform-load) from analysis. Separate analysis from presentation.
- Verify type and size of the data. View and test a subset of the data.
- Humans should never gaze upon unsorted data.
Adapted from "The Mental Game of Python", a talk given by Raymond Hettinger at PyBay 2019.
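Two of the heuristics above, hopping between data-structure "islands" and sorting data before viewing it, can be sketched together in a few lines (the data is invented):

```python
# Island-hopping: string -> list -> dictionary -> sorted list for display.
raw = "alice:3 bob:1 carol:2"
pairs = [item.split(":") for item in raw.split()]   # list island
scores = {name: int(n) for name, n in pairs}        # dictionary island
for name in sorted(scores):                         # never show unsorted data
    print(name, scores[name])
```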