Python, being a beautifully designed high-level and interpreter-based programming language, provides us with many features for the programmer’s comfort. But sometimes, the outcomes of a Python snippet may not seem obvious at first sight.
Here’s a fun project attempting to explain what exactly is happening under the hood for some counter-intuitive snippets and lesser-known features in Python.
While some of the examples you see below may not be WTFs in the truest sense, they'll reveal some of the interesting parts of Python that you might be unaware of. I find it a nice way to learn the internals of a programming language, and I believe that you'll find it interesting too!
If you’re an experienced Python programmer, you can take it as a challenge to get most of them right in the first attempt. You may have already experienced some of them before, and I might be able to revive sweet old memories of yours! 😅
PS: If you’re a returning reader, you can learn about the new modifications here (the examples marked with asterisk are the ones added in the latest major revision).
(Optional): One line describing the unexpected output.
💡 Explanation:
Brief explanation of what's happening and why it is happening.
# Set up code
# More examples for further clarification (if necessary)
Output (Python version(s)):
Note: All the examples are tested on Python 3.5.2 interactive interpreter, and they should work for all the Python versions unless explicitly specified before the output.
A nice way to get the most out of these examples, in my opinion, is to read them in sequential order, and for every example:
Carefully read the initial code for setting up the example. If you’re an experienced Python programmer, you’ll successfully anticipate what’s going to happen next most of the time.
Read the output snippets and,
Check if the outputs are the same as you’d expect.
Make sure you know the exact reason behind the output being the way it is.
If the answer is no (which is perfectly okay), take a deep breath, and read the explanation (and if you still don’t understand, shout out! and create an issue here).
If yes, give a gentle pat on your back, and you may skip to the next example.
PS: You can also read WTFPython at the command line using the pypi package,
$ pip install wtfpython -U
$ wtfpython
Section: Strain your brain!
▶ First things first! *
For some reason, Python 3.8's "Walrus" operator (:=) has become quite popular. Let's check it out,
1.
>>> a = "wtf_walrus"
>>> a
'wtf_walrus'

>>> a := "wtf_walrus"
  File "<stdin>", line 1
    a := "wtf_walrus"
      ^
SyntaxError: invalid syntax

>>> (a := "wtf_walrus") # This works though
'wtf_walrus'
>>> a
'wtf_walrus'
2.

>>> a = 6
>>> (a, b = 16, 19) # Oops
  File "<stdin>", line 1
    (a, b = 16, 19)
          ^
SyntaxError: invalid syntax

>>> (a, b := 16, 19) # This prints out a weird 3-tuple
(6, 16, 19)

>>> a # a is still unchanged?
6

>>> b
16
💡 Explanation
Quick walrus operator refresher
The Walrus operator (:=) was introduced in Python 3.8; it can be useful in situations where you'd want to assign values to variables within an expression.
def some_func():
    # Assume some expensive computation here
    # time.sleep(1000)
    return 5

# So instead of,
if some_func():
    print(some_func()) # Which is bad practice since computation is happening twice

# or
a = some_func()
if a:
    print(a)

# Now you can concisely write
if a := some_func():
    print(a)
Output (> 3.8):

5
This saved one line of code, and implicitly prevented invoking some_func twice.
Unparenthesized "assignment expressions" (uses of the walrus operator) are restricted at the top level, hence the SyntaxError in the a := "wtf_walrus" statement of the first snippet. Parenthesizing it worked as expected and assigned a.
As usual, parenthesizing an expression containing the = operator is not allowed. Hence the syntax error in (a, b = 16, 19).
The syntax of the Walrus operator is of the form NAME := expr, where NAME is a valid identifier and expr is a valid expression. Hence, iterable packing and unpacking are not supported, which means,
(a := 6, 9) is equivalent to ((a := 6), 9) and ultimately (a, 9) (where a's value is 6)
Similarly, (a, b := 16, 19) is equivalent to (a, (b := 16), 19) which is nothing but a 3-tuple.
▶ Strings can be tricky sometimes
1.
>>> a = "some_string"
>>> id(a)
140420665652016
>>> id("some" + "_" + "string") # Notice that both the ids are same.
140420665652016
, "wtf!">>>aisb# All versions except 3.7.xTrue>>>a="wtf!"; b="wtf!">>>aisb# This will print True or False depending on where you're invoking it (python shell / ipython / as a script)False
# This time in file some_file.pya="wtf!"b="wtf!"print(aisb)
# prints True when the module is invoked!
4.
Output (< Python 3.7):
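>>> 'a' * 20 is 'aaaaaaaaaaaaaaaaaaaa'
True
>>> 'a' * 21 is 'aaaaaaaaaaaaaaaaaaaaa'
False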
Makes sense, right?
💡 Explanation:
The behavior in first and second snippets is due to a CPython optimization (called string interning) that tries to use existing immutable objects in some cases rather than creating a new object every time.
After being "interned," many variables may reference the same string object in memory (thereby saving memory).
In the snippets above, strings are implicitly interned. The decision of when to implicitly intern a string is implementation-dependent. There are some rules that can be used to guess if a string will be interned or not:
All length 0 and length 1 strings are interned.
Strings are interned at compile time ('wtf' will be interned but ''.join(['w', 't', 'f']) will not be interned)
Strings that are not composed of ASCII letters, digits, or underscores are not interned. This explains why 'wtf!' was not interned (due to the !). The CPython implementation of this rule can be found here
When a and b are set to "wtf!" in the same line, the Python interpreter creates a new object, then references the second variable at the same time. If you do it on separate lines, it doesn’t “know” that there’s already "wtf!" as an object (because "wtf!" is not implicitly interned as per the facts mentioned above). It’s a compile-time optimization. This optimization doesn’t apply to 3.7.x versions of CPython (check this issue for more discussion).
A compile unit in an interactive environment like IPython consists of a single statement, whereas it consists of the entire module in case of modules. a, b = "wtf!", "wtf!" is a single statement, whereas a = "wtf!"; b = "wtf!" are two statements in a single line. This explains why the identities are different in a = "wtf!"; b = "wtf!", and also explains why they are the same when invoked in some_file.py
The abrupt change in the output of the fourth snippet is due to a peephole optimization technique known as Constant folding. This means the expression 'a'*20 is replaced by 'aaaaaaaaaaaaaaaaaaaa' during compilation to save a few clock cycles during runtime. Constant folding only occurs for strings having a length of less than 21. (Why? Imagine the size of .pyc file generated as a result of the expression 'a'*10**10). Here’s the implementation source for the same.
Note: In Python 3.7, Constant folding was moved out from peephole optimizer to the new AST optimizer with some change in logic as well, so the fourth snippet doesn’t work for Python 3.7. You can read more about the change here.
▶ Be careful with chained operations
>>> (False == False) in [False] # makes sense
False
>>> False == (False in [False]) # makes sense
False
>>> False == False in [False] # now what?
True

>>> True is False == False
False
>>> False is False is False
True

>>> 1 > 0 < 1
True
>>> (1 > 0) < 1
False
>>> 1 > (0 < 1)
False
Formally, if a, b, c, …, y, z are expressions and op1, op2, …, opN are comparison operators, then a op1 b op2 c … y opN z is equivalent to a op1 b and b op2 c and … y opN z, except that each expression is evaluated at most once.
While such behavior might seem silly to you in the above examples, it’s fantastic with stuff like a == b == c and 0 <= x <= 100.
False is False is False is equivalent to (False is False) and (False is False)
True is False == False is equivalent to True is False and False == False and since the first part of the statement (True is False) evaluates to False, the overall expression evaluates to False.
1 > 0 < 1 is equivalent to 1 > 0 and 0 < 1 which evaluates to True.
The expression (1 > 0) < 1 is equivalent to True < 1 and
>>> int(True)
1
>>> True + 1 # not relevant for this example, but just for fun
2
So, 1 < 1 evaluates to False
▶ How not to use is operator
The following is a very famous example present all over the internet.
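1.

>>> a = 256
>>> b = 256
>>> a is b
True

>>> a = 257
>>> b = 257
>>> a is b
False

2.

>>> a = []
>>> b = []
>>> a is b
False

>>> a = tuple()
>>> b = tuple()
>>> a is b
True

3.

>>> a, b = 257, 257
>>> a is b
True

Output (Python 3.7.x specifically)

>>> a, b = 257, 257
>>> a is b
False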
The current implementation keeps an array of integer objects for all integers between -5 and 256, when you create an int in that range you just get back a reference to the existing object. So it should be possible to change the value of 1. I suspect the behavior of Python, in this case, is undefined. :-)
Here the interpreter isn't smart enough while executing b = 257 to recognize that we've already created an integer of the value 257, and so it goes on to create another object in the memory.
Similar optimization applies to other immutable objects like empty tuples as well. Since lists are mutable, that's why [] is [] will return False and () is () will return True. This explains our second snippet. Let's move on to the third one,
Both a and b refer to the same object when initialized with same value in the same line.
When a and b are set to 257 in the same line, the Python interpreter creates a new object, then references the second variable at the same time. If you do it on separate lines, it doesn't "know" that there's already 257 as an object.
It's a compiler optimization and specifically applies to the interactive environment. When you enter two lines in a live interpreter, they're compiled separately, therefore optimized separately. If you were to try this example in a .py file, you would not see the same behavior, because the file is compiled all at once. This optimization is not limited to integers, it works for other immutable data types like strings (check the "Strings are tricky example") and floats as well,
>>> a, b = 257.0, 257.0
>>> a is b
True
Why didn't this work for Python 3.7? The abstract reason is that such compiler optimizations are implementation-specific (i.e., they may change with version, OS, etc.). I'm still figuring out what exact implementation change caused the issue; you can check out this issue for updates.
some_dict = {}
some_dict[5.5] = "JavaScript"
some_dict[5.0] = "Ruby"
some_dict[5] = "Python"

>>> some_dict[5.5]
"JavaScript"
>>> some_dict[5.0] # "Python" destroyed the existence of "Ruby"?
"Python"
>>> some_dict[5]
"Python"

>>> complex_five = 5 + 0j
>>> type(complex_five)
complex
>>> some_dict[complex_five]
"Python"
So, why is Python all over the place?
💡 Explanation
Uniqueness of keys in a Python dictionary is by equivalence, not identity. So even though 5, 5.0, and 5 + 0j are distinct objects of different types, since they're equal, they can't both be in the same dict (or set). As soon as you insert any one of them, attempting to look up any distinct but equivalent key will succeed with the original mapped value (rather than failing with a KeyError):
>>> 5 == 5.0 == 5 + 0j
True
>>> 5 is not 5.0 is not 5 + 0j
True

>>> some_dict = {}
>>> some_dict[5.0] = "Ruby"
>>> 5.0 in some_dict
True
>>> (5 in some_dict) and (5 + 0j in some_dict)
True
This applies when setting an item as well. So when you do some_dict[5] = "Python", Python finds the existing item with equivalent key 5.0 -> "Ruby", overwrites its value in place, and leaves the original key alone.
So how can we update the key to 5 (instead of 5.0)? We can't actually do this update in place, but what we can do is first delete the key (del some_dict[5.0]), and then set it (some_dict[5]) to get the integer 5 as the key instead of the float 5.0, though this should rarely be needed.
How did Python find 5 in a dictionary containing 5.0? Python does this in constant time without having to scan through every item by using hash functions. When Python looks up a key foo in a dict, it first computes hash(foo) (which runs in constant-time). Since in Python it is required that objects that compare equal also have the same hash value (docs here), 5, 5.0, and 5 + 0j have the same hash value.
Note: The inverse is not necessarily true: Objects with equal hash values may themselves be unequal. (This causes what's known as a hash collision, and degrades the constant-time performance that hashing usually provides.)
▶ Deep down, we're all the same.
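class WTF:
    pass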
Output:
>>> WTF() == WTF() # two different instances can't be equal
False
>>> WTF() is WTF() # identities are also different
False
>>> hash(WTF()) == hash(WTF()) # hashes _should_ be different as well
True
>>> id(WTF()) == id(WTF())
True
💡 Explanation:
When id was called, Python created a WTF class object and passed it to the id function. The id function takes its id (its memory location), and throws away the object. The object is destroyed.
When we do this twice in succession, Python allocates the same memory location to this second object as well. Since (in CPython) id uses the memory location as the object id, the id of the two objects is the same.
So, the object's id is unique only for the lifetime of the object. After the object is destroyed, or before it is created, something else can have the same id.
But why did the is operator evaluate to False? Let's see with this snippet.
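class WTF(object):
    def __init__(self): print("I")
    def __del__(self): print("D")

Output:

>>> WTF() is WTF()
I
I
D
D
False
>>> id(WTF()) == id(WTF())
I
D
I
D
True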
As you may observe, the order in which the objects are destroyed is what made all the difference here.
▶ Disorder within order *
from collections import OrderedDict

dictionary = dict()
dictionary[1] = 'a'; dictionary[2] = 'b';

ordered_dict = OrderedDict()
ordered_dict[1] = 'a'; ordered_dict[2] = 'b';

another_ordered_dict = OrderedDict()
another_ordered_dict[2] = 'b'; another_ordered_dict[1] = 'a';

class DictWithHash(dict):
    """
    A dict that also implements __hash__ magic.
    """
    __hash__ = lambda self: 0

class OrderedDictWithHash(OrderedDict):
    """
    An OrderedDict that also implements __hash__ magic.
    """
    __hash__ = lambda self: 0
Output
", line 1, in
TypeError: unhashable type: 'dict'
# Makes sense since dict don't have __hash__ implemented, let's use
# our wrapper classes.
>>> dictionary = DictWithHash()
>>> dictionary[1] = 'a'; dictionary[2] = 'b';
>>> ordered_dict = OrderedDictWithHash()
>>> ordered_dict[1] = 'a'; ordered_dict[2] = 'b';
>>> another_ordered_dict = OrderedDictWithHash()
>>> another_ordered_dict[2] = 'b'; another_ordered_dict[1] = 'a';
>>> len({dictionary, ordered_dict, another_ordered_dict})
1
>>> len({ordered_dict, another_ordered_dict, dictionary}) # changing the order
2">
>>> dictionary == ordered_dict # If a == b
True
>>> dictionary == another_ordered_dict # and b == c
True
>>> ordered_dict == another_ordered_dict # then why isn't c == a ??
False

# We all know that a set consists of only unique elements,
# let's try making a set of these dictionaries and see what happens...

>>> len({dictionary, ordered_dict, another_ordered_dict})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'

# Makes sense since dict don't have __hash__ implemented, let's use
# our wrapper classes.
>>> dictionary = DictWithHash()
>>> dictionary[1] = 'a'; dictionary[2] = 'b';
>>> ordered_dict = OrderedDictWithHash()
>>> ordered_dict[1] = 'a'; ordered_dict[2] = 'b';
>>> another_ordered_dict = OrderedDictWithHash()
>>> another_ordered_dict[2] = 'b'; another_ordered_dict[1] = 'a';
>>> len({dictionary, ordered_dict, another_ordered_dict})
1
>>> len({ordered_dict, another_ordered_dict, dictionary}) # changing the order
2
What is going on here?
💡 Explanation:
The reason why transitive equality didn't hold among dictionary, ordered_dict, and another_ordered_dict is the way the __eq__ method is implemented in the OrderedDict class. From the docs
Equality tests between OrderedDict objects are order-sensitive and are implemented as list(od1.items())==list(od2.items()). Equality tests between OrderedDict objects and other Mapping objects are order-insensitive like regular dictionaries.
The reason for this behavior is that it allows OrderedDict objects to be directly substituted anywhere a regular dictionary is used.
Okay, so why did changing the order affect the length of the generated set object? The answer is precisely this lack of transitive equality. Since sets are "unordered" collections of unique elements, the order in which elements are inserted shouldn't matter. But in this case, it does matter. Let's break it down a bit,
>>> some_set = set()
>>> some_set.add(dictionary) # these are the mapping objects from the snippets above
>>> ordered_dict in some_set
True
>>> some_set.add(ordered_dict)
>>> len(some_set)
1
>>> another_ordered_dict in some_set
True
>>> some_set.add(another_ordered_dict)
>>> len(some_set)
1

>>> another_set = set()
>>> another_set.add(ordered_dict)
>>> another_ordered_dict in another_set
False
>>> another_set.add(another_ordered_dict)
>>> len(another_set)
2
>>> dictionary in another_set
True
>>> another_set.add(another_ordered_dict)
>>> len(another_set)
2
So the inconsistency is due to another_ordered_dict in another_set being False because ordered_dict was already present in another_set and as observed before, ordered_dict == another_ordered_dict is False.
▶ Keep trying... *
def some_func():
    try:
        return 'from_try'
    finally:
        return 'from_finally'

def another_func():
    for _ in range(3):
        try:
            continue
        finally:
            print("Finally!")

def one_more_func():  # A gotcha!
    try:
        for i in range(3):
            try:
                1 / i
            except ZeroDivisionError:
                # Let's throw it here and handle it outside for loop
                raise ZeroDivisionError("A trivial divide by zero error")
            finally:
                print("Iteration", i)
                break
    except ZeroDivisionError as e:
        print("Zero division error occurred", e)
When a return, break or continue statement is executed in the try suite of a "try…finally" statement, the finally clause is also executed on the way out.
The return value of a function is determined by the last return statement executed. Since the finally clause always executes, a return statement executed in the finally clause will always be the last one executed.
The caveat here is, if the finally clause executes a return or break statement, the temporarily saved exception is discarded.
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
Where exprlist is the assignment target. This means that the equivalent of {exprlist} = {next_value} is executed for each item in the iterable.
An interesting example that illustrates this:
for i in range(4):
    print(i)
    i = 10
Output:

0
1
2
3
Did you expect the loop to run just once?
💡 Explanation:
The assignment statement i = 10 never affects the iterations of the loop because of the way for loops work in Python. Before the beginning of every iteration, the next item provided by the iterator (range(4) in this case) is unpacked and assigned to the target list variables (i in this case).
The enumerate(some_string) function yields a new value i (a counter going up) and a character from some_string in each iteration. It then sets the (just assigned) i key of the dictionary some_dict to that character. The unrolling of the loop can be simplified as:
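>>> i, some_dict[i] = (0, 'w')
>>> i, some_dict[i] = (1, 't')
>>> i, some_dict[i] = (2, 'f')
>>> some_dict
{0: 'w', 1: 't', 2: 'f'}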
In a generator expression, the in clause is evaluated at declaration time, but the conditional clause is evaluated at runtime.
So before runtime, array is re-assigned to the list [2, 8, 22], and since out of 1, 8 and 15, only the count of 8 is greater than 0, the generator only yields 8.
The differences in the output of g1 and g2 in the second part are due to the way variables array_1 and array_2 are re-assigned values.
In the first case, array_1 is bound to the new object [1,2,3,4,5] and since the in clause is evaluated at the declaration time it still refers to the old object [1,2,3,4] (which is not destroyed).
In the second case, the slice assignment to array_2 updates the same old object [1,2,3,4] to [1,2,3,4,5]. Hence both the g2 and array_2 still have reference to the same object (which has now been updated to [1,2,3,4,5]).
Okay, going by the logic discussed so far, shouldn't the value of list(gen) in the third snippet be [11, 21, 31, 12, 22, 32, 13, 23, 33]? (because array_3 and array_4 are going to behave just like array_1). The reason why (only) array_4 values got updated is explained in PEP-289
Only the outermost for-expression is evaluated immediately, the other expressions are deferred until the generator is run.
When we initialize the row variable, this visualization shows what happens in memory:
And when the board is initialized by multiplying the row, this is what happens inside the memory (each of the elements board[0], board[1], and board[2] is a reference to the same list referred to by row):
We can avoid this scenario here by not using the row variable to generate board, as sketched below. (Asked in this issue).
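A minimal sketch of that fix (assuming the usual row = [""] * 3; board = [row] * 3 setup): build each row independently, e.g. with a nested comprehension, so the three rows are distinct objects.

board = [["" for _ in range(3)] for _ in range(3)]  # a fresh list per row
board[0][0] = "X"
print(board)  # [['X', '', ''], ['', '', ''], ['', '', '']]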
The values of x were different in every iteration prior to appending some_func to funcs, but all the functions return 6 when they're evaluated after the loop completes.
When defining a function inside a loop that uses the loop variable in its body, the loop function's closure is bound to the variable, not its value. The function looks up x in the surrounding context, rather than using the value of x at the time the function is created. So all of the functions use the latest value assigned to the variable for computation. We can see that it's using the x from the surrounding context (i.e. not a local variable) with:
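>>> import inspect
>>> inspect.getclosurevars(funcs[0])
ClosureVars(nonlocals={}, globals={'x': 6}, builtins={'print': <built-in function print>}, unbound=set())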
To get the desired behavior you can pass in the loop variable as a named variable to the function. Why does this work? Because this will define the variable inside the function's scope. It will no longer go to the surrounding (global) scope to look up the variable's value but will create a local variable that stores the value of x at that point in time.
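A sketch of that fix, assuming a loop like funcs = []; for x in range(7): ... from this example:

funcs = []
for x in range(7):
    def some_func(x=x):  # the default value captures x right now
        return x
    funcs.append(some_func)

# Each function now remembers the value x had when it was defined:
# [func() for func in funcs] evaluates to [0, 1, 2, 3, 4, 5, 6]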
Everything is an object in Python, which includes classes as well as their objects (instances).
class type is the metaclass of class object, and every class (including type) has inherited directly or indirectly from object.
There is no real base class among object and type. The confusion in the above snippets is arising because we're thinking about these relationships (issubclass and isinstance) in terms of Python classes. The relationship between object and type can't be reproduced in pure python. To be more precise the following relationships can't be reproduced in pure Python,
class A is an instance of class B, and class B is an instance of class A.
class A is an instance of itself.
These relationships between object and type (both being instances of each other as well as themselves) exist in Python because of "cheating" at the implementation level.
The subclass relationships were expected to be transitive, right? (i.e., if A is a subclass of B, and B is a subclass of C, then A should be a subclass of C)
💡 Explanation:
Subclass relationships are not necessarily transitive in Python. Anyone is allowed to define their own, arbitrary __subclasscheck__ in a metaclass.
When issubclass(cls, Hashable) is called, it simply looks for non-Falsey "__hash__" method in cls or anything it inherits from.
Since object is hashable, but list is non-hashable, it breaks the transitivity relation.
Accessing classm or method twice creates objects that are equal but not the same for the same instance of SomeClass.
💡 Explanation
Functions are descriptors. Whenever a function is accessed as an
attribute, the descriptor is invoked, creating a method object which "binds" the function with the object owning the
attribute. If called, the method calls the function, implicitly passing the bound object as the first argument
(this is how we get self as the first argument, despite not passing it explicitly).
Accessing the attribute multiple times creates a method object every time! Therefore o1.method is o1.method is
never truthy. Accessing functions as class attributes (as opposed to instance) does not create methods, however; so SomeClass.method is SomeClass.method is truthy.
classmethod transforms functions into class methods. Class methods are descriptors that, when accessed, create
a method object which binds the class (type) of the object, instead of the object itself.
Unlike functions, classmethods will create a method also when accessed as class attributes (in which case they
bind the class, not the type of it). So SomeClass.classm is SomeClass.classm is falsy.
A method object compares equal when both the functions are equal, and the bound objects are the same. So o1.method == o1.method is truthy, although not the same object in memory.
staticmethod transforms functions into a "no-op" descriptor, which returns the function as-is. No method
objects are ever created, so comparison with is is truthy.
Having to create new "method" objects every time Python calls instance methods and having to modify the arguments
every time in order to insert self affected performance badly.
CPython 3.7 solved it by introducing new opcodes that deal with calling methods
without creating the temporary method objects. This is used only when the accessed function is actually called, so the
snippets here are not affected, and still generate methods :)
all([[]]) returns False because the passed array has one element, [], and in python, an empty list is falsy.
all([[[]]]) and higher recursive variants are always True. This is because the passed array's single element ([[...]]) is no longer empty, and lists with values are truthy.
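For example,

>>> all([])
True
>>> all([[]])
False
>>> all([[[]]])
True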
▶ The surprising comma
Output (< 3.6):
", line 1
def h(x, **kwargs,):
^
SyntaxError: invalid syntax
A trailing comma is not always legal in the formal parameters list of a Python function.
In Python, the argument list is defined partially with leading commas and partially with trailing commas. This conflict causes situations where a comma is trapped in the middle, and no rule accepts it.
Note: The trailing comma problem is fixed in Python 3.6. The remarks in this post discuss in brief different usages of trailing commas in Python.
▶ Strings and the backslashes
Output:
"
>>> print(r""")
"
>>> print(r"")
File "", line 1
print(r"")
^
SyntaxError: EOL while scanning string literal
In a usual python string, the backslash is used to escape characters that may have a special meaning (like single-quote, double-quote, and the backslash itself).
In a raw string literal (as indicated by the prefix r), backslashes are passed along as-is, but they still "escape" the following quote character in the sense of preventing it from terminating the string (both characters are kept).
>>> r'wt\"f' == 'wt\\"f'
True
>>> print(repr(r'wt\"f'))
'wt\\"f'
This means when a parser encounters a backslash in a raw string, it expects another character following it. And in our case (print(r"")), the backslash escaped the trailing quote, leaving the parser without a terminating quote (hence the SyntaxError). That's why backslashes don't work at the end of a raw string.
▶ not knot!
Output:
", line 1
x == not y
^
SyntaxError: invalid syntax">
Operator precedence affects how an expression is evaluated, and == operator has higher precedence than not operator in Python.
So not x == y is equivalent to not (x == y) which is equivalent to not (True == False) finally evaluating to True.
But x == not y raises a SyntaxError because the expression is not parsed as x == (not y), which is what you might have expected at first sight.
The parser expected the not token to be a part of the not in operator (because both == and not in operators have the same precedence), but after not being able to find an in token following the not token, it raises a SyntaxError.
▶ Half triple-quoted strings
Output:
>>> print('wtfpython''')
wtfpython
>>> print("wtfpython""")
wtfpython
>>> # The following statements raise `SyntaxError`
>>> # print('''wtfpython')
>>> # print("""wtfpython")
  File "<stdin>", line 3
    print("""wtfpython")
                       ^
SyntaxError: EOF while scanning triple-quoted string literal
💡 Explanation:
Python supports implicit string literal concatenation. Example,

>>> print("wtf" "python")
wtfpython
>>> print("wtf" "") # or "wtf"""
wtf
''' and """ are also string delimiters in Python which causes a SyntaxError because the Python interpreter was expecting a terminating triple quote as delimiter while scanning the currently encountered triple quoted string literal.
▶ What's wrong with booleans?
1.
# A simple example to count the number of booleans and
# integers in an iterable of mixed data types.
mixed_list = [False, 1.0, "some_string", 3, True, [], False]
integers_found_so_far = 0
booleans_found_so_far = 0

for item in mixed_list:
    if isinstance(item, int):
        integers_found_so_far += 1
    elif isinstance(item, bool):
        booleans_found_so_far += 1
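Output:

>>> integers_found_so_far
4
>>> booleans_found_so_far
0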
The integer value of True is 1 and that of False is 0.
>>> int(True)
1
>>> int(False)
0
See this StackOverflow answer for the rationale behind it.
Initially, Python had no bool type (people used 0 for false and a non-zero value like 1 for true). True, False, and a bool type were added in the 2.x versions, but, for backward compatibility, True and False couldn't be made constants. They were just built-in variables, and it was possible to reassign them.
Python 3 was backward-incompatible, so the issue could finally be fixed, and thus the last snippet won't work with Python 3.x!
Class variables and variables in class instances are internally handled as dictionaries of a class object. If a variable name is not found in the dictionary of the current class, the parent classes are searched for it.
The += operator modifies the mutable object in-place without creating a new object. So changing the attribute of one instance affects the other instances and the class attribute as well.
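A small sketch of both points (hypothetical classes A, B, and C):

class A:
    x = 1

class B(A):
    pass

B.x        # 1 -- 'x' isn't in B's __dict__, so A's is found
B.x = 2    # creates a separate 'x' in B's own __dict__; A.x is still 1

class C:
    items = []     # a mutable class attribute

c1, c2 = C(), C()
c1.items += [5]    # += mutates the shared list in place
print(c2.items)    # [5] -- every instance (and C itself) sees the change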
From Python 3.3 onwards, it became possible to use return statement with values inside generators (See PEP380). The official docs say that,
"... return expr in a generator causes StopIteration(expr) to be raised upon exit from the generator."
In the case of some_func(3), StopIteration is raised at the beginning because of return statement. The StopIteration exception is automatically caught inside the list(...) wrapper and the for loop. Therefore, the above two snippets result in an empty list.
To get ["wtf"] from the generator some_func we need to catch the StopIteration exception,
a = float('inf')
b = float('nan')
c = float('-iNf')  # These strings are case-insensitive
d = float('nan')
Output:
>>> a
inf
>>> b
nan
>>> c
-inf
>>> float('some_other_string')
ValueError: could not convert string to float: some_other_string
>>> a == -c # inf==inf
True
>>> None == None # None == None
True
>>> b == d # but nan!=nan
False
>>> 50 / a
0.0
>>> a / a
nan
>>> 23 + b
nan
2.
>>> x = float('nan')
>>> y = x / x
>>> y is y # identity holds
True
>>> y == y # equality fails of y
False
>>> [y] == [y] # but the equality succeeds for the list containing y
True
💡 Explanation:
'inf' and 'nan' are special strings (case-insensitive), which, when explicitly typecast to float, are used to represent mathematical "infinity" and "not a number" respectively.
Since according to IEEE standards NaN != NaN, obeying this rule breaks the reflexivity assumption of a collection element in Python i.e. if x is a part of a collection like list, the implementations like comparison are based on the assumption that x == x. Because of this assumption, the identity is compared first (since it's faster) while comparing two elements, and the values are compared only when the identities mismatch. The following snippet will make things clearer,
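>>> x = float('nan')
>>> x == x, [x] == [x]
(False, True)
>>> y = float('nan')
>>> y == y, [y] == [y]
(False, True)
>>> x == y, [x] == [y]
(False, False)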
Immutable sequences
An object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be modified; however, the collection of objects directly referenced by an immutable object cannot change.)
+= operator changes the list in-place. The item assignment doesn't work, but when the exception occurs, the item has already been changed in place.
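A quick sketch of that gotcha, using a hypothetical tuple that holds a list:

some_tuple = ([1, 2], 3, 4)

>>> some_tuple[0] += [5]
TypeError: 'tuple' object does not support item assignment
>>> some_tuple  # ...but the list was already extended in place
([1, 2, 5], 3, 4)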
When an exception has been assigned using as target, it is cleared at the end of the except clause. This is as if
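except E as N:
    foo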
was translated into
except E as N:
    try:
        foo
    finally:
        del N
This means the exception must be assigned to a different name to be able to refer to it after the except clause. Exceptions are cleared because, with the traceback attached to them, they form a reference cycle with the stack frame, keeping all locals in that frame alive until the next garbage collection occurs.
The clauses are not scoped in Python. Everything in the example is present in the same scope, and the variable e got removed due to the execution of the except clause. The same is not the case with functions that have their separate inner-scopes. The example below illustrates this:
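def f(x):
    del(x)
    print(x)

x = 5
y = [5, 4, 3]

>>> f(x)
UnboundLocalError: local variable 'x' referenced before assignment
>>> f(y)
UnboundLocalError: local variable 'x' referenced before assignment
>>> x
5
>>> y
[5, 4, 3]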
In Python 2.x, the variable name e gets assigned to Exception() instance, so when you try to print, it prints nothing.
Output (Python 2.x):
>>> e
Exception()
>>> print e # Nothing is printed!
▶ The mysterious key type conversion
class SomeClass(str):
    pass

some_dict = {'s': 42}
Output:
>>> type(list(some_dict.keys())[0])
str
>>> s = SomeClass('s')
>>> some_dict[s] = 40
>>> some_dict # expected: Two different key-value pairs
{'s': 40}
>>> type(list(some_dict.keys())[0])
str
💡 Explanation:
Both the object s and the string "s" hash to the same value because SomeClass inherits the __hash__ method of str class.
SomeClass("s") == "s" evaluates to True because SomeClass also inherits __eq__ method from str class.
Since both the objects hash to the same value and are equal, they are represented by the same key in the dictionary.
For the desired behavior, we can redefine the __eq__ method in SomeClass
class SomeClass(str):
    def __eq__(self, other):
        return (
            type(self) is SomeClass
            and type(other) is SomeClass
            and super().__eq__(other)
        )

    # When we define a custom __eq__, Python stops automatically inheriting the
    # __hash__ method, so we need to define it as well
    __hash__ = str.__hash__

some_dict = {'s': 42}
An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.
The + in (target_list "=")+ means there can be one or more target lists. In this case, target lists are a, b and a[b] (note the expression list is exactly one, which in our case is {}, 5).
After the expression list is evaluated, its value is unpacked to the target lists from left to right. So, in our case, first the {}, 5 tuple is unpacked to a, b and we now have a = {} and b = 5.
a is now assigned to {}, which is a mutable object.
The second target list is a[b] (you may expect this to throw an error because both a and b have not been defined in the statements before. But remember, we just assigned a to {} and b to 5).
Now, we are setting the key 5 in the dictionary to the tuple ({}, 5) creating a circular reference (the {...} in the output refers to the same object that a is already referencing). Another simpler example of circular reference could be
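>>> some_list = some_list[0] = [0]
>>> some_list
[[...]]
>>> some_list[0]
[[...]]
>>> some_list is some_list[0]
True
>>> some_list[0][0][0][0][0][0] == some_list
True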
Iteration over a dictionary that you edit at the same time is not supported.
It runs eight times because that's the point at which the dictionary resizes to hold more keys (we have eight deletion entries, so a resize is needed). This is actually an implementation detail.
How deleted keys are handled and when the resize occurs might be different for different Python implementations.
So for Python versions other than Python 2.7 - Python 3.5, the count might be different from 8 (but whatever the count is, it's going to be the same every time you run it). You can find some discussion around this here or in this StackOverflow thread.
From Python 3.7.6 onwards, you'll see a RuntimeError: dictionary keys changed during iteration exception if you try to do this.
>>> x = SomeClass()
>>> y = x
>>> del x # this should print "Deleted!"
>>> del y
Deleted!
Phew, deleted at last. You might have guessed what saved __del__ from being called in our first attempt to delete x. Let's add more twists to the example.
2.
>>> x = SomeClass()
>>> y = x
>>> del x
>>> y # check if y exists
<__main__.SomeClass instance at 0x7f98a1a67fc8>
>>> del y # Like previously, this should print "Deleted!"
>>> globals() # oh, it didn't. Let's check all our global variables and confirm
Deleted!
{'__builtins__': <module '__builtin__' (built-in)>, 'SomeClass': <class __main__.SomeClass at 0x7f98a1a5f668>, '__package__': None, '__name__': '__main__', '__doc__': None}
Okay, now it's deleted 😕
💡 Explanation:
del x doesn’t directly call x.__del__().
When del x is encountered, Python deletes the name x from the current scope and decrements the reference count of the object that x referenced by 1. __del__() is called only when the object's reference count reaches zero.
In the second output snippet, __del__() was not called because the previous statement (>>> y) in the interactive interpreter created another reference to the same object (specifically, the _ magic variable, which references the result value of the last non-None expression on the REPL), thus preventing the reference count from reaching zero when del y was encountered.
Calling globals (or really, executing anything that has a non-None result) caused _ to reference the new result, dropping the existing reference. Now the reference count reached 0, and we can see "Deleted!" being printed (finally!).
When you make an assignment to a variable in a scope, it becomes local to that scope. So a becomes local to the scope of another_func, but it has not been initialized previously in the same scope, which throws an error.
To modify the outer scope variable a in another_func, we have to use the global keyword.
def another_func():
    global a
    a += 1
    return a
Output:

>>> another_func()
2
In another_closure_func, a becomes local to the scope of another_inner_func, but it has not been initialized previously in the same scope, which is why it throws an error.
To modify the outer scope variable a in another_inner_func, use the nonlocal keyword. The nonlocal statement is used to refer to variables defined in the nearest outer (excluding the global) scope.
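A sketch following the another_inner_func pattern described above:

def another_closure_func():
    a = 1
    def another_inner_func():
        nonlocal a  # refer to 'a' from the nearest enclosing (non-global) scope
        a += 1
        return a
    return another_inner_func()

>>> another_closure_func()
2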
It's never a good idea to change the object you're iterating over. The correct way to do so is to iterate over a copy of the object instead, and list_3[:] does just that.
>>> some_list = [1, 2, 3, 4]
>>> id(some_list)
139798789457608
>>> id(some_list[:]) # Notice that python creates new object for sliced list.
139798779601192
Difference between del, remove, and pop:
del var_name just removes the binding of the var_name from the local or global namespace (That's why the list_1 is unaffected).
remove removes the first matching value, not a specific index; it raises ValueError if the value is not found.
pop removes the element at a specific index and returns it; it raises IndexError if an invalid index is specified.
Why is the output [2, 4]?
The list iteration is done index by index, and when we remove 1 from list_2 or list_4, the contents of the lists are now [2, 3, 4]. The remaining elements are shifted down, i.e., 2 is at index 0, and 3 is at index 1. Since the next iteration is going to look at index 1 (which is the 3), the 2 gets skipped entirely. A similar thing will happen with every alternate element in the list sequence.
Refer to this StackOverflow thread explaining the example
See also this nice StackOverflow thread for a similar example related to dictionaries in Python.
▶ Lossy zip of iterators *
>>> numbers = list(range(7))
>>> numbers
[0, 1, 2, 3, 4, 5, 6]
>>> first_three, remaining = numbers[:3], numbers[3:]
>>> first_three, remaining
([0, 1, 2], [3, 4, 5, 6])
>>> numbers_iter = iter(numbers)
>>> list(zip(numbers_iter, first_three))
[(0, 0), (1, 1), (2, 2)]
# so far so good, let's zip the remaining
>>> list(zip(numbers_iter, remaining))
[(4, 3), (5, 4), (6, 5)]
Where did element 3 go from the numbers list?
💡 Explanation:
From Python docs, here's an approximate implementation of zip function,
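def zip(*iterables):
    sentinel = object()
    iterators = [iter(it) for it in iterables]
    while iterators:
        result = []
        for it in iterators:
            elem = next(it, sentinel)
            if elem is sentinel:
                return
            result.append(elem)
        yield tuple(result)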
So the function takes in an arbitrary number of iterable objects, adds each of their items to the result list by calling the next function on them, and stops whenever any of the iterables is exhausted.
The caveat here is when any iterable is exhausted, the existing elements in the result list are discarded. That's what happened with 3 in the numbers_iter.
The correct way to do the above using zip would be,
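>>> numbers = list(range(7))
>>> numbers_iter = iter(numbers)
>>> list(zip(first_three, numbers_iter))
[(0, 0), (1, 1), (2, 2)]
>>> list(zip(remaining, numbers_iter))
[(3, 3), (4, 4), (5, 5), (6, 6)]

Passing the shorter iterable first means zip stops before it silently pulls an extra element out of numbers_iter.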
In Python, for-loops use the scope they exist in and leave their defined loop-variable behind. This also applies if we explicitly defined the for-loop variable in the global namespace before. In this case, it will rebind the existing variable.
The differences in the output of Python 2.x and Python 3.x interpreters for list comprehension example can be explained by following change documented in What’s New In Python 3.0 changelog:
"List comprehensions no longer support the syntactic form [... for var in item1, item2, ...]. Use [... for var in (item1, item2, ...)] instead. Also, note that list comprehensions have different semantics: they are closer to syntactic sugar for a generator expression inside a list() constructor, and in particular, the loop control variables are no longer leaked into the surrounding scope."
The default mutable arguments of functions in Python aren't really initialized every time you call the function. Instead, the recently assigned value to them is used as the default value. When we explicitly passed [] to some_func as the argument, the default value of the default_arg variable was not used, so the function returned as expected.
>>> some_func.__defaults__ # This will show the default argument values for the function
([],)
>>> some_func()
>>> some_func.__defaults__
(['some_string'],)
>>> some_func()
>>> some_func.__defaults__
(['some_string', 'some_string'],)
>>> some_func([])
>>> some_func.__defaults__
(['some_string', 'some_string'],)
A common practice to avoid bugs due to mutable arguments is to assign None as the default value and later check if any value is passed to the function corresponding to that argument. Example:
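def some_func(default_arg=None):
    if default_arg is None:
        default_arg = []
    default_arg.append("some_string")
    return default_arg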
some_list = [1, 2, 3]
try:
    # This should raise an ``IndexError``
    print(some_list[4])
except IndexError, ValueError:
    print("Caught!")

try:
    # This should raise a ``ValueError``
    some_list.remove(4)
except IndexError, ValueError:
    print("Caught again!")
Output (Python 2.x):
Caught!

ValueError: list.remove(x): x not in list
Output (Python 3.x):
", line 3
except IndexError, ValueError:
^
SyntaxError: invalid syntax">
To add multiple Exceptions to the except clause, you need to pass them as a parenthesized tuple as the first argument. The second argument is an optional name, which, when supplied, will bind the Exception instance that has been raised. Example,
some_list = [1, 2, 3]
try:
    # This should raise a ``ValueError``
    some_list.remove(4)
except (IndexError, ValueError), e:
    print("Caught again!")
    print(e)
Output (Python 2.x):
Caught again!
list.remove(x): x not in list
Output (Python 3.x):
", line 4
except (IndexError, ValueError), e:
^
IndentationError: unindent does not match any outer indentation level">
a += b doesn't always behave the same way as a = a + b. Classes may implement the op= operators differently, and lists do this.
The expression a = a + [5,6,7,8] generates a new list and sets a's reference to that new list, leaving b unchanged.
The expression a += [5,6,7,8] is actually mapped to an "extend" function that operates on the list such that a and b still point to the same list that has been modified in-place.
▶ Name resolution ignoring class scope
1.
x = 5

class SomeClass:
    x = 17
    y = (x for i in range(10))
Output:
>>> list(SomeClass.y)[0]
5
2.
x = 5

class SomeClass:
    x = 17
    y = [x for i in range(10)]
Output (Python 2.x):

>>> SomeClass.y[0]
17

Output (Python 3.x):

>>> SomeClass.y[0]
5
💡 Explanation
Scopes nested inside class definition ignore names bound at the class level.
A generator expression has its own scope.
Starting from Python 3.x, list comprehensions also have their own scope.
▶ Rounding like a banker *
Let's implement a naive function to get the middle element of a list:
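def get_middle(some_list):
    mid_index = round(len(some_list) / 2)
    return some_list[mid_index - 1]

Output:

>>> get_middle([1])  # looks good
1
>>> get_middle([1, 2, 3])  # looks good
2
>>> get_middle([1, 2, 3, 4, 5])  # huh?
2
>>> len([1, 2, 3, 4, 5]) / 2  # good
2.5
>>> round(len([1, 2, 3, 4, 5]) / 2)  # why?
2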
This is not a float precision error, in fact, this behavior is intentional. Since Python 3.0, round() uses banker's rounding where .5 fractions are rounded to the nearest even number:
>>> round(0.5)
0
>>> round(1.5)
2
>>> round(2.5)
2
>>> import numpy # numpy does the same
>>> numpy.round(0.5)
0.0
>>> numpy.round(1.5)
2.0
>>> numpy.round(2.5)
2.0
This is the recommended way to round .5 fractions as described in IEEE 754. However, the other way (round away from zero) is taught in school most of the time, so banker's rounding is likely not that well known. Furthermore, some of the most popular programming languages (for example: JavaScript, Java, C/C++, Ruby, Rust) do not use banker's rounding either. Therefore, this is still quite special to Python and may result in confusion when rounding fractions.
For 1, the correct statement for expected behavior is x, y = (0, 1) if True else (None, None).
For 2, the correct statement for expected behavior is t = ('one',) or t = 'one', (missing comma) otherwise the interpreter considers t to be a str and iterates over it character by character.
() is a special token and denotes empty tuple.
In 3, as you might have already figured out, there's a missing comma after the 5th element ("that") in the list. So by implicit string literal concatenation, "that" and the string right after it are joined into a single element, and the list ends up with nine elements instead of the expected ten.
No AssertionError was raised in 4th snippet because instead of asserting the individual expression a == b, we're asserting entire tuple. The following snippet will clear things up,
>>> b = "javascript"
>>> assert a == b
Traceback (most recent call last):
File "", line 1, in
AssertionError
>>> assert (a == b, "Values are not equal") :1: SyntaxWarning: assertion is always true, perhaps remove parentheses?
>>> assert a == b, "Values are not equal"
Traceback (most recent call last):
File "
", line 1, in
AssertionError: Values are not equal">
>>>a="python">>>b="javascript">>>asserta==bTraceback (mostrecentcalllast):
File"", line1, in<module>AssertionError>>>assert (a==b, "Values are not equal")
<stdin>:1: SyntaxWarning: assertionisalwaystrue, perhapsremoveparentheses?
>>>asserta==b, "Values are not equal"Traceback (mostrecentcalllast):
File"", line1, in<module>AssertionError: Valuesarenotequal
As for the fifth snippet, most methods that modify the items of sequence/mapping objects like list.append, dict.update, list.sort, etc. modify the objects in-place and return None. The rationale behind this is to improve performance by avoiding making a copy of the object if the operation can be done in-place (Referred from here).
The last one should be fairly obvious; a mutable object (like a list) can be altered in the function, and the reassignment of an immutable (a -= 1) is not an alteration of the value.
Being aware of these nitpicks can save you hours of debugging effort in the long run.
▶ Splitsies *
>>> 'a'.split()
['a']

# is same as
>>> 'a'.split(' ')
['a']

# but
>>> len(''.split())
0

# isn't the same as
>>> len(''.split(' '))
1
💡 Explanation:
It might appear at first that the default separator for split is a single space ' ', but as per the docs
If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Consequently, splitting an empty string or a string consisting of just whitespace with a None separator returns [].
If sep is given, consecutive delimiters are not grouped together and are deemed to delimit empty strings (for example, '1,,2'.split(',') returns ['1', '', '2']). Splitting an empty string with a specified separator returns [''].
Noticing how the leading and trailing whitespaces are handled in the following snippet will make things clear,
>>> ' a '.split(' ')
['', 'a', '']
>>> ' a '.split()
['a']
>>> ''.split(' ')
['']
It is often advisable to not use wildcard imports. The first obvious reason for this is, in wildcard imports, the names with a leading underscore don't get imported. This may lead to errors during runtime.
Had we used from ... import a, b, c syntax, the above NameError wouldn't have occurred.
If you really want to use wildcard imports, then you'd have to define the list __all__ in your module that will contain a list of public objects that'll be available when we do wildcard imports.
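A sketch of how __all__ works, with a hypothetical module:

# utility_module.py
__all__ = ['is_odd']

def is_odd(x):
    return bool(x % 2)

def another_helper(x):   # public name, but not listed in __all__
    return x

# main.py
from utility_module import *

is_odd(3)           # True
another_helper(3)   # NameError: name 'another_helper' is not defined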
Unlike sorted, the reversed method returns an iterator. Why? Because sorting requires the iterator to be either modified in-place or use an extra container (a list), whereas reversing can simply work by iterating from the last index to the first.
So during comparison sorted(y) == sorted(y), the first call to sorted() will consume the iterator y, and the next call will just return an empty list.
Before Python 3.5, the boolean value for datetime.time object was considered to be False if it represented midnight in UTC. It is error-prone when using the if obj: syntax to check if the obj is null or some equivalent of "empty."
Section: The Hidden treasures!
This section contains a few lesser-known and interesting things about Python that most beginners like me are unaware of (well, not anymore).
▶ Okay Python, Can you make me fly?
Well, here you go
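import antigravity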
Output:
Sshh... It's a super-secret.
💡 Explanation:
antigravity module is one of the few easter eggs released by Python developers.
import antigravity opens up a web browser pointing to the classic XKCD comic about Python.
Well, there's more to it. There's another easter egg inside the easter egg. If you look at the code, there's a function defined that purports to implement the XKCD's geohashing algorithm.
▶ goto, but why?
from goto import goto, label

for i in range(9):
    for j in range(9):
        for k in range(9):
            print("I am trapped, please rescue!")
            if k == 2:
                goto .breakout # breaking out from a deeply nested loop

label .breakout
print("Freedom!")
Braces? No way! If you think that's disappointing, use Java. Okay, another surprising thing: can you find where the SyntaxError is raised in the __future__ module code?
💡 Explanation:
The __future__ module is normally used to provide features from future versions of Python. The "future" in this specific context is, however, ironic.
This is an easter egg concerned with the community's feelings on this issue.
The code is actually present here in future.c file.
When the CPython compiler encounters a future statement, it first runs the appropriate code in future.c before treating it as a normal import statement.
>>> from __future__ import barry_as_FLUFL
>>> "Ruby" != "Python" # there's no doubt about it
  File "some_file.py", line 1
    "Ruby" != "Python"
              ^
SyntaxError: invalid syntax

>>> "Ruby" <> "Python"
True
There we go.
💡 Explanation:
This is relevant to PEP-401, released on April 1, 2009 (now you know what it means).
Quoting from the PEP-401
Recognized that the != inequality operator in Python 3.0 was a horrible, finger-pain inducing mistake, the FLUFL reinstates the <> diamond operator as the sole spelling.
There were more things that Uncle Barry had to share in the PEP; you can read them here.
It works well in an interactive environment, but it will raise a SyntaxError when you run via python file (see this issue). However, you can wrap the statement inside an eval or compile to get it working,
"Python"'))">
from __future__ import barry_as_FLUFL
print(eval('"Ruby" <> "Python"'))
▶ Even Python understands that love is complicated
Wait, what's this? this is love ❤️
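import this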
Output:
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
It's the Zen of Python!
>>> love = this
>>> this is love
True
>>> love is True
False
>>> love is False
False
>>> love is not True or False
True
>>> love is not True or False; love is love # Love is complicated
True
💡 Explanation:
this module in Python is an easter egg for The Zen Of Python (PEP 20).
And if you think that's already interesting enough, check out the implementation of this.py. Interestingly, the code for the Zen violates itself (and that's probably the only place where this happens).
Regarding the statement love is not True or False; love is love, ironic but it's self-explanatory (if not, please see the examples related to is and is not operators).
▶ Yes, it exists!
The else clause for loops. One typical example might be:
def does_exists_num(l, to_find):
    for num in l:
        if num == to_find:
            print("Exists!")
            break
    else:
        print("Does not exist")
The else clause after a loop is executed only when there's no explicit break after all the iterations. You can think of it as a "nobreak" clause.
else clause after a try block is also called "completion clause" as reaching the else clause in a try statement means that the try block actually completed successfully.
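For instance, a sketch of the try/else form:

try:
    result = 5 / 1
except ZeroDivisionError:
    print("Exception occurred")
else:
    # runs only when the try block completes without raising
    print("Try executed successfully, result =", result)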
▶ Ellipsis *
def some_func():
    Ellipsis
Output
", line 1, in
NameError: name 'SomeRandomString' is not defined
>>> Ellipsis
Ellipsis">
>>> some_func()
# No output, No Error

>>> SomeRandomString
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'SomeRandomString' is not defined

>>> Ellipsis
Ellipsis
💡 Explanation
In Python, Ellipsis is a globally available built-in object which is equivalent to the literal `...`.
Ellipsis can be used for several purposes,
As a placeholder for code that hasn't been written yet (just like pass statement)
In slicing syntax to represent the full slices in the remaining directions
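For example, with a small NumPy array (NumPy understands Ellipsis in its slicing syntax):

import numpy as np
three_dimensional_array = np.arange(8).reshape(2, 2, 2)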
So our three_dimensional_array is an array of arrays of arrays. Let's say we want to print the second element (index 1) of all the innermost arrays; we can use Ellipsis to bypass all the preceding dimensions
>>> three_dimensional_array[:, :, 1]
array([[1, 3],
       [5, 7]])
>>> three_dimensional_array[..., 1] # using Ellipsis.
array([[1, 3],
       [5, 7]])