Moshe Zadka: A Labyrinth of Lies (6 hours, 18 minutes ago)

In the 1986 movie Labyrinth, a young girl (played by Jennifer Connelly) is faced with a dilemma. The adorable Jim Henson puppets explain to her that one guard always lies, and one guard always tells the truth. She needs to figure out which door leads to the castle at the center of the eponymous Labyrinth, and which one to certain death (dun-dun-dun!).

I decided that, like any reasonable movie watcher, I needed to implement this in Python.

First, I implemented two guards: one who always tells the truth, and one who always lies. The guards know who they are, and what the doors are, but can only answer True or False.

guards = [None, None]
doors = ["certain death", "castle"]

class Guard:
    def __init__(self, truth_teller, guards, doors):
        self._truth_teller = truth_teller
        self._guards = guards
        self._doors = doors
    def ask(self, question):
        answer = bool(question(self, self._guards, self._doors))
        if not self._truth_teller:
            answer = not answer
        return answer

guards[0] = Guard(True, guards, doors)
guards[1] = Guard(False, guards, doors)

This being a children’s movie, the girl defeats all odds and figures out what to ask the guard: “would he (points to the other guard) tell me that this (points to the door on the left) door leads to the castle?”

def question(guard, guards, doors):
    other_guard, = (candidate for candidate in guards if candidate != guard)
    def other_question(ignored, guards, doors):
        return doors[0] == "castle"
    return other_guard.ask(other_question)

What would the truth-teller answer?


And the liar?
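Both questions are easy to check. Here is a self-contained version of the snippets above (note that it is the liar, not the truth-teller, who negates the answer):

```python
guards = [None, None]
doors = ["certain death", "castle"]

class Guard:
    def __init__(self, truth_teller, guards, doors):
        self._truth_teller = truth_teller
        self._guards = guards
        self._doors = doors
    def ask(self, question):
        answer = bool(question(self, self._guards, self._doors))
        if not self._truth_teller:
            answer = not answer
        return answer

guards[0] = Guard(True, guards, doors)
guards[1] = Guard(False, guards, doors)

def question(guard, guards, doors):
    other_guard, = (candidate for candidate in guards if candidate != guard)
    def other_question(ignored, guards, doors):
        return doors[0] == "castle"
    return other_guard.ask(other_question)

print(guards[0].ask(question))  # the truth-teller truthfully relays the liar's lie: True
print(guards[1].ask(question))  # the liar lies about the truth-teller's truth: True
```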


No matter who she asks, now she can count on a lie. After a short exposition, she confidently walks through the other door. It’s a piece of cake!

Thanks to Rina Arstain and Veronica Hanus for their feedback on an earlier draft. All mistakes and issues that remain are my responsibility.

Stack Abuse: How to Write a Makefile - Automating Python Setup, Compilation, and Testing (11 hours, 22 minutes ago)


When you want to run a project that has multiple sources, resources, etc., you need to make sure that all of the code is recompiled before the main program is compiled or run.

For example, imagine our software looks something like this:

main_program.source -> uses the libraries `math.source` and `draw.source`
math.source -> uses the libraries `floating_point_calc.source` and `integer_calc.source`
draw.source -> uses the library `opengl.source`

So if we make a change in opengl.source for example, we need to recompile both draw.source and main_program.source because we want our project to be up-to-date on all ends.

This is a very tedious and time-consuming process. And because all good things in the software world come from some engineer being too lazy to type in a few extra commands, Makefile was born.

Makefile uses the make utility, and if we're to be completely accurate, Makefile is just a file that houses the code that the make utility uses. However, the name Makefile is much more recognizable.

Makefile essentially keeps your project up to date by rebuilding only the parts of your source code whose children are out of date. It can also automate compilation, builds, and testing.

In this context, a child is a library or a chunk of code which is essential for its parent's code to run.

This concept is very useful and is commonly used with compiled programming languages. Now, you may be asking yourself:

Isn't Python an interpreted language?

Well, Python is technically both an interpreted and a compiled language: in order to interpret a line of code, it first compiles it into byte code, which is not hardcoded for a specific CPU and can be run after the fact.
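A tiny illustration of that precompilation step: the standard library's dis module will show the byte code a function was compiled into.

```python
import dis

def add(a, b):
    return a + b

# The function object carries its compiled byte code around with it...
print(len(add.__code__.co_code) > 0)
# ...and dis renders it human-readable (the exact opcodes vary between Python versions)
dis.dis(add)
```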

A more detailed, yet concise explanation can be found on Ned Batchelder's blog. Also, if you need a refresher on how Programming Language Processors work, we've got you covered.

Concept Breakdown

Because Makefile is just an amalgamation of multiple concepts, there are a few things you'll need to know in order to write a Makefile:

  1. Bash Scripting
  2. Regular Expressions
  3. Target Notation
  4. Understanding your project's file structure

With these in hand, you'll be able to write instructions for the make utility and automate your compilation.

Bash is a command language (it's also a Unix shell but that's not really relevant right now), which we will be using to write actual commands or automate file generation.

For example, if we want to echo all the library names to the user:

for file in $(DIRS); do
    echo $$file
done

Target notation is a way of writing which files are dependent on other files. For example, if we want to represent the dependencies from the illustrative example above in proper target notation, we'd write:

main_program.cpp: math.cpp draw.cpp
math.cpp: floating_point_calc.cpp integer_calc.cpp
draw.cpp: opengl.cpp

As far as file structure goes, it depends on your programming language and environment. Some IDEs automatically generate some sort of Makefile as well, and you won't need to write it from scratch. However, it's very useful to understand the syntax if you want to tweak it.

Sometimes modifying the default Makefile is even mandatory, like when you want to make OpenGL and CLion play nice together.

Bash Scripting

Bash is mostly used for automation on Linux distributions, and is essential to becoming an all-powerful Linux "wizard". It's also an imperative script language, which makes it very readable and easy to understand. Note that you can run bash on Windows systems, but it's not really a common use case.

First let's go over a simple "Hello World" program in Bash:

#!/bin/bash
# Comments in bash look like this

# The first line above indicates that we'll be using bash for this script
# The exact syntax is: #![path to the interpreter]
echo "Hello world!"

When creating a script, depending on your current umask, the script itself might not be executable. You can change this by running the following line of code in your terminal:

chmod +x <filename>

This adds execute permission to the target file. However, if you want to give more specific permissions, you can execute something similar to the following command:

chmod 777 <filename>

More information on chmod can be found in its man page (`man chmod`).

Next, let's quickly go over some basics utilizing simple if-statements and variables:


#!/bin/bash

echo "What's the answer to the ultimate question of life, the universe, and everything?"
read -p "Answer: " number
# We dereference variables using the $ operator
echo "Your answer: $number computing..."
# if statement
# The double parentheses are necessary: whenever we want to evaluate an arithmetic expression or subexpression, we have to use double parentheses (imagine you have selective double vision)
if (( number == 42 ))
# This notation, even though it's more easily readable, is rarely used
then
	echo "Correct!"
# This is a more common approach
elif (( number == 41 || number == 43 )); then
	echo "So close!"
else
	echo "Incorrect, you will have to wait 7 and a half million years for the answer!"
fi

Now, there is an alternative way of writing flow control which is actually more common than if statements. As we all know Boolean operators can be used for the sole purpose of generating side-effects, something like:

++a && b++  

This means that we first increment a, and then, depending on the language we're using, check whether the value of the expression evaluates to true (generally, any non-zero integer counts as true). If it is true, then we increment b.
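For comparison, Python's `and` short-circuits the same way, evaluating its right operand only when the left one is truthy. A small sketch (the `log` helper is just an illustrative name) makes this visible:

```python
calls = []

def log(name, value):
    """Record that this operand was evaluated, then hand back its value."""
    calls.append(name)
    return value

# The right operand of `and` only runs when the left one is truthy
log("a", False) and log("b", True)
assert calls == ["a"]

calls.clear()
log("a", True) and log("b", True)
assert calls == ["a", "b"]
```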

This concept is called conditional execution and is used very commonly in bash scripting, for example:


# Regular if notation
echo "Checking if project is generated..."
# Very important note: the whitespace between `[` and `-d` is absolutely essential
# If you remove it, it'll cause an error
if [ -d project_dir ]
then
	echo "Dir already generated."
else
	echo "No directory found, generating..."
	mkdir project_dir
fi

This can be rewritten using a conditional execution:

echo "Checking if project is generated..."
[ -d project_dir ] || mkdir project_dir 

Or, we can take it even further with nested expressions:

echo "Checking if project is generated..."
[ -d project_dir ] || (echo "No directory found, generating..." && mkdir project_dir)

Then again, nesting expressions can lead down a rabbit hole and can become extremely convoluted and unreadable, so it's not advised to nest more than two expressions at most.

You might be confused by the weird [ -d ] notation used in the code snippet above, and you're not alone.

The reasoning behind this is that originally conditional statements in Bash were written using the test [EXPRESSION] command. But when people started writing conditional expressions in brackets, Bash followed, albeit with a very unmindful hack, by just remapping the [ character to the test command, with the ] signifying the end of the expression, most likely implemented after the fact.

Because of this, the command test -d FILENAME, which checks whether the provided file exists and is a directory, can just as well be written as [ -d FILENAME ].
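You can even see from Python that `[` really is a command of its own: on most Unix systems it exists as an executable alongside `test`, and both report their verdict through the exit status (0 for true, 1 for false):

```python
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    # `test -d` and `[ -d ... ]` both exit with 0 when the directory exists
    assert subprocess.run(["test", "-d", d]).returncode == 0
    assert subprocess.run(["[", "-d", d, "]"]).returncode == 0
    # and with 1 when the path is not a directory
    assert subprocess.run(["test", "-d", d + "/missing"]).returncode == 1
```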

Regular Expressions

Regular expressions (regex for short) give us an easy way to generalize our code. Or rather to repeat an action for a specific subset of files that meet certain criteria. We'll cover some regex basics and a few examples in the code snippet below.

Note: When we say that an expression catches ( -> ) a word, it means that the specified word is in the subset of words that the regular expression defines:

# Literal characters just signify those same characters
StackAbuse -> StackAbuse

# The or (|) operator is used to signify that something can be either one or other string
Stack|Abuse -> Stack
			-> Abuse
Stack(Abuse|Overflow) -> StackAbuse
					  -> StackOverflow

# The conditional (?) operator is used to signify the potential occurrence of a string
The answer to life the universe and everything is( 42)?...
	-> The answer to life the universe and everything is...
    -> The answer to life the universe and everything is 42...
# The * and + operators tell us how many times a character can occur
# * indicates that the specified character can occur 0 or more times
# + indicates that the specified character can occur 1 or more times 
He is my( great)+ uncle Brian. -> He is my great uncle Brian.
							   -> He is my great great uncle Brian.
# The example above can also be written like this:
He is my great( great)* uncle Brian.
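Since this is a Python-flavored tour, the patterns above are easy to verify with Python's re module:

```python
import re

# The | operator matches either alternative
assert re.fullmatch(r"Stack(Abuse|Overflow)", "StackAbuse")
assert re.fullmatch(r"Stack(Abuse|Overflow)", "StackOverflow")

# The ? operator makes the preceding group optional
answer = r"The answer to life the universe and everything is( 42)?\.\.\."
assert re.fullmatch(answer, "The answer to life the universe and everything is...")
assert re.fullmatch(answer, "The answer to life the universe and everything is 42...")

# + requires the group at least once; * also allows zero occurrences
great = r"He is my( great)+ uncle Brian\."
assert re.fullmatch(great, "He is my great great uncle Brian.")
assert not re.fullmatch(great, "He is my uncle Brian.")
assert re.fullmatch(r"He is my great( great)* uncle Brian\.", "He is my great uncle Brian.")
```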

This is just the bare minimum you need for the immediate future with Makefile. In the long term, though, learning regular expressions properly is a really good idea.

Target Notation

After all of this, now we can finally get into the meat of the Makefile syntax. Target notation is just a way of representing all the dependencies that exist between our source files.

Let's look at an example that has the same file structure as the example from the beginning of the article:

# First of all, all pyc (compiled .py files) are dependent on their source code counterparts
main_program.pyc: main_program.py
	python $<
math.pyc: math.py
	python $<
draw.pyc: draw.py
	python $<

# Then we can implement our custom dependencies
main_program.pyc: math.pyc draw.pyc
	python $<
math.pyc: floating_point_calc.pyc integer_calc.pyc
	python $<
draw.pyc: opengl.pyc
	python $<

Keep in mind that the above is just for the sake of clarifying how the target notation works. It's very rarely used in Python projects like this, because the difference in performance is in most cases negligible.

More often than not, Makefiles are used to set up a project, clean it up, maybe provide some help and test your modules. The following is an example of a much more realistic Python project Makefile:

# Signifies our desired python version
# Makefile macros (or variables) are defined a little bit differently from traditional bash. Keep in mind that a Makefile mixes top-level Makefile-only syntax with bash syntax inside the recipes.
PYTHON = python3

# .PHONY defines targets that are not dependent on any specific file
# This is most often used for targets that act like commands rather than files
.PHONY: help setup test run clean

# Defining a variable that holds a list of names
FILES = input output

# Defines the default target that `make` will try to make, or, in the case of a phony target, whose commands it will execute
# This target is executed whenever we just type `make`
.DEFAULT_GOAL = help

# The @ makes sure that the command itself isn't echoed in the terminal
help:
	@echo "---------------HELP-----------------"
	@echo "To setup the project type make setup"
	@echo "To test the project type make test"
	@echo "To run the project type make run"
	@echo "------------------------------------"

# This generates the desired project file structure
# A very important thing to note is that macros (or Makefile variables) are referenced in a target's recipe with a single dollar sign ${}, but all shell variables are referenced with two dollar signs $${}
setup:
	@echo "Checking if project files are generated..."
	[ -d project_files.project ] || (echo "No directory found, generating..." && mkdir project_files.project)
	for FILE in ${FILES}; do \
		touch "project_files.project/$${FILE}.txt"; \
	done
# The ${} notation is specific to the make syntax and is very similar to bash's $()
# This target uses pytest to test our source files
test:
	${PYTHON} -m pytest

# In this context, the *.project pattern means "anything that has the .project extension"
clean:
	rm -r *.project

With that in mind, let's open up the terminal and run the Makefile to help us out with generating and compiling a Python project:

[Screenshot: running make with the Makefile]


Makefile and make can make your life much easier, and can be used with almost any technology or language.

It can automate most of your building and testing, and much more. And as can be seen from the example above, it can be used with both interpreted and compiled languages.

Real Python: The Real Python Podcast – Episode #16: Thinking in Pandas: Python Data Analysis the Right Way (11 hours, 48 minutes ago)

Are you using the Python library Pandas the right way? Do you wonder about getting better performance, or how to optimize your data for analysis? What does normalization mean? This week on the show we have Hannah Stepanek to discuss her new book "Thinking in Pandas".


CubicWeb: Release of CubicWeb 3.28 (14 hours, 27 minutes ago)

Hello CubicWeb community,

It is with pleasure (and some delay) that we are proud to announce the release of CubicWeb 3.28.

The big highlights of this release are:

  • CubicWeb handles content negotiation. You can get an entity as RDF when it is requested in the Accept HTTP header (see this commit for instance)
  • CubicWeb has a new dynamic database connection pooler, which replaces the old static one. (see this commit for instance).
  • RQL resultsets now store the variable names used in the RQL Select queries. This should ease the use of rsets and will allow building better tools (see this commit)
  • CubicWeb now requires Python 3.6 as a minimum.
  • A big upgrade in our CI workflow has been done, both for tests and documentation.
  • The development of CubicWeb has moved to Logilab's heptapod forge.

To get more details about what has been added, modified or removed, you can have a look at the complete changelog published in CubicWeb's documentation.

CubicWeb 3.28 has been published:

CubicWeb 3.29 is now on its way. Tomorrow (July 3rd 2020) afternoon we will have a v-sprint (Friday sprint) to work on the documentation of CubicWeb and its satellites. See you there!

Test and Code: 120: FastAPI & Typer - Sebastián Ramírez (16 hours, 48 minutes ago)

FastAPI is a modern, fast (high-performance), web framework for building APIs with Python based on standard Python type hints.
Typer is a library for building CLI applications, also based on Python type hints.
Type hints and many other details are intended to make it easier to develop, test, and debug applications using FastAPI and Typer.

The person behind FastAPI and Typer is Sebastián Ramírez.

Sebastián is on the show today, and we discuss:

  • FastAPI
  • Rest APIs
  • Swagger UI
  • Future features of FastAPI
  • Starlette
  • Typer
  • Click
  • Testing with Typer and Click
  • Typer autocompletion
  • Typer CLI

Special Guest: Sebastián Ramírez.

Sponsored By:

Support Test & Code : Python Testing for Software Engineering



Reuven Lerner: Level up your Python skills with a supercharged Humble Bundle! (1 day, 6 hours ago)

Want to improve your Python skills?

Yeah, I know. Of course you do.

Well, then you should grab an amazing deal from Humble Bundle, with content from a bunch of online Python trainers — including me!

Buying the bundle not only gives you access to some amazing Python training at a great price. It also supports the Python Software Foundation (which handles the administrative side of the Python language and ecosystem) and Race Forward (which works to improve race relations in the US).

There are three tiers to the bundle, and I have a course in each one:

  1. Comprehending Comprehensions
  2. Object-oriented Python
  3. Any one cohort of Weekly Python Exercise

Included in the bundles are also courses and books from Michael Kennedy, Trey Hunner, Matt Harrison, PyBites (Bob and Julian), Real Python (Dan Bader), and Cory Althoff. Plus it includes a subscription to the PyCharm editor.

So don’t delay! Sign up for this Humble Bundle, improve your Python, help two good causes, and save some money. It’s only available for another 20 days!

Sign up here:

The post Level up your Python skills with a supercharged Humble Bundle! appeared first on Reuven Lerner.

Daniel Roy Greenfeld: I'm Teaching A Live Online Django Crash Course (1 day, 7 hours ago)

Live Online Django Training

Course Announcement

On July 16th and 17th of 2020, I'll be running a live instruction of my beginner-friendly Django Crash Course. This is a live interactive class conducted via Zoom conferencing software. We're going to walk through the book together with students. If you get stuck, there will be at least two members of the Feldroy team available to help.

Each course day will have two 3-hour sessions, with an hour-long break between them.

Attendees Receive

  • Hours of instruction in building web apps by noted authors and senior programmers
  • An invite to both July 16th and July 17th class days
  • The Django Crash Course e-book (if you already bought one, we'll send you a discount code for $19.99 off the online class)
  • Membership in our forthcoming online forums when they are activated

Class Prerequisites

  • Basic knowledge of the Python programming language
  • Computer where you are allowed to install software (No work restrictions)
  • Internet fast enough to join online meetings

Topics Covered

  • Setting up a development environment
  • Cookiecutter for rapidly accelerating development
  • Django
    • Forms
    • Class-Based Views
    • Models
    • Templates
    • Admin
  • Writing Django tests
    • PyTest
    • Factories
  • Best practices per Two Scoops of Django
    • Proven patterns for avoiding duplication of work (DRY)
    • Writing maintainable code
    • More secure projects

We're selling the course for the introductory price of just $99 and space is limited, so register today!

PyCharm: PyCharm EAP#3 is out! (1 day, 10 hours ago)

PyCharm EAP #3 is out and it’s almost releasing time!! If you are like us you are also looking forward to the end of the month! We have been talking about new features for the last month and today we will take a deeper look into two very exciting ones. For the full list, check our release notes.

Version Control

As we mentioned before, PyCharm 2020.2 will come with full support for GitHub Pull Requests!

What does it mean? It means that you’ll be able to accomplish pretty much all the needed tasks within the entire pull request workflow without leaving your IDE! Assign, manage, and merge pull requests, view the timeline and in-line comments, submit comments and reviews, and accept changes. All from within the PyCharm UI!

Let’s take a deeper look…

New pull request dedicated view

PyCharm now has one dedicated view that shows all the information you need to analyze one or more pull requests. You can simply click on any listed PR and access its information including messages, branches, author, assignee, changed files, commits, timeline, comments, and more.

View the results of pre-commit checks in the Timeline

At the bottom of the timeline, you'll find a panel showing the results of your checks as they appear, helping you review your pull requests and fix issues.

Start, request and submit reviews from within PyCharm

Reviews are a very important step in this flow, and in the new UI you have everything you need to perform tasks in every stage of your reviewing process. Add/remove comments, use the dedicated window to check differences between files, resolve issues, and do a lot more without leaving PyCharm.

Merge your pull requests from within PyCharm

Merging your pull request into master was not easy in PyCharm 2020.1 and earlier. Even though it was possible with some workarounds, the process was not straightforward. That has changed in 2020.2: now you can easily merge your PR, as well as rebase & merge or squash & merge.

We are excited about the new PR flow, and we will bring more information about what else is supported in the future. For now, let’s talk about another very nice new feature that we are very proud of.

Debug failed tests

Talking about coding and not talking about testing is not a good idea. Even though a lot of Python developers don’t write tests regularly, we believe that it should be a very important part of professional developers’ workflow.

When tests are passing it’s all happiness, but what happens when they fail? Well, for those of you who write tests and run it under the debugger we have very nice news! PyCharm can now automatically stop on an exception breakpoint in your test without needing you to explicitly set it beforehand.

It means that when your test fails and you are running under the debugger PyCharm will understand it, stop the execution and show you exactly where the problem is happening to provide a shorter feedback loop for debugging problems in failed tests. Check it out:

New in PyCharm

Try it now!

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP. If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP and stay up to date. You can find the installation instructions on our website.

Philippe Normand: Web-augmented graphics overlay broadcasting with WPE and GStreamer (1 day, 10 hours ago)

Graphics overlays are everywhere nowadays in the live video broadcasting industry. In this post I introduce a new demo relying on GStreamer and WPEWebKit to deliver low-latency web-augmented video broadcasts.

Readers of this blog might remember a few posts about WPEWebKit and a GStreamer element we at Igalia worked on …

EuroPython: EuroPython 2020: Our keynotes (1 day, 15 hours ago)

We’re happy to announce our keynote lineup for EuroPython 2020.

Guido van Rossum - Q&A

In this session, you’ll get a chance to get your questions answered by Guido van Rossum, our retired BDFL.

In order to submit a question, please use the following Google form: Guido van Rossum Q&A: Question Submission.


Siddha Ganju - 30 Golden Rules of Deep Learning Performance

“Watching paint dry is faster than training my deep learning model.”
“If only I had ten more GPUs, I could train my model in time.”
“I want to run my model on a cheap smartphone, but it’s probably too heavy and slow.”

If this sounds like you, then you might like this talk.

Exploring the landscape of training and inference, we cover a myriad of tricks that step-by-step improve the efficiency of most deep learning pipelines, reduce wasted hardware cycles, and make them cost-effective. We identify and fix inefficiencies across different parts of the pipeline, including data preparation, reading and augmentation, training, and inference.

With a data-driven approach and easy-to-replicate TensorFlow examples, finely tune the knobs of your deep learning pipeline to get the best out of your hardware. And with the money you save, demand a raise!


Naomi Ceder - Staying for the Community: Building Community in the face of Covid-19

Python communities around the world, large and small, are facing loss: from the loss of in-person meetups and conferences to the loss of employment, and even the potential loss of health and life. As communities we are all confronting uncertainty and unanswered questions. In this talk I would like to reflect on some of those questions. What are communities doing now to preserve a sense of community in the face of this crisis? What might we do, and what options will we have, for coming events? How can we build and foster community and still keep everyone safe? What challenges might we all face in the future? What sources of support can we find? What are our sources of optimism and hope?


Alejandro Saucedo - Meditations on First Deployment: A Practical Guide to Responsible Development

As the impact of software reaches ever farther and wider, our professional responsibility as developers becomes more critical to society. The production systems we design, build and maintain often bring inherent adversities with complex technical, societal and even ethical challenges. The skillsets required to tackle these challenges go beyond the algorithms, and require cross-functional collaboration that often goes beyond a single developer. In this talk we introduce intuitive and practical insights from a few of the core ethics themes in software, including privacy, equity, trust and transparency. We cover their importance, the growing societal challenges, and how organisations such as The Institute for Ethical AI, The Linux Foundation, the Association for Computing Machinery, NumFOCUS, the IEEE and the Python Software Foundation are contributing to these critical themes through standards, policy advice and open source software initiatives. Finally, we wrap up the talk with practical steps that any individual can take to get involved and contribute to these great open initiatives and their critical ongoing discussions.


EuroPython 2020 is waiting for you

We’ve compiled a full program for the event:

Conference tickets are available on our registration page. We hope to see lots of you at the conference from July 23-26. Rest assured that we’ll make this a great event again — even within the limitations of running the conference online.


EuroPython 2020 Team