A collection of computer, gaming and general nerdy things.

Monday, December 15, 2014

Resolutions

I've often been told that New Year's Resolutions are something we tell ourselves to make ourselves feel better. Or at least something along those lines. How many times have I told myself, "I'm going to start working out this year!" Too many to count.
However, something I try to live by is learning or doing something new every day. Especially if it seems small and inconsequential. Small and inconsequential things are the foundations of greater things. Today, I may finally grasp what monads are. You might learn how to make a bechamel. Someone else might learn how to thread a needle. Somewhere, a child is kicking a soccer ball for the first time.
Taken by themselves, these things are small and inconsequential. However, monads are the foundation for complex Haskell (and other functional) programs. Bechamels are used to form all sorts of sauces. Threading a needle leads to becoming a designer. That kid kicking a ball for the first time might become the next Pele.
Learning these small things leads to so much more. It's easy to give up after not noticing any progress for a week, two weeks, a month. But if you stick with it and push yourself every day on some front, you'll be able to look back and say, "I didn't even know what a roux was back then. Now, I run a restaurant!"

Why share all this?

I've made a short list of resolutions that, quite honestly, will mutate and change over the year and the years to come. These things are specific to me, so you might be wondering, "Alec, why do I care?" It's fine if you don't. However, I'm sharing them in an attempt to keep myself accountable for what I set out to do. In fact, being more public is one of my goals.

1: Becoming Less Introverted

I don't find anything necessarily wrong with being introverted. It's gotten me this far in life, why not another 26 years? I'm not expecting to become the life of the party and go out every night to hang out with a large group of people. That's not me, nor do I envision that happening.
However, just a few days ago I gave a presentation at my local Python meetup. I knew my topic well (decorators) but speaking publicly is terrifying to me. Even though I had done everything I'd ever heard of when preparing to give a presentation, I got really nervous. Not because I was worried about the well-known guys in the room, but because I don't like being the center of attention at all -- so much so that I don't tell many people when my birthday is. I've done things where I'm forced to be the center of attention at least some of the time -- running a D&D campaign is kinda hard to do if the DM doesn't draw any attention to himself. But being in front of six friends is quite different than being in front of thirty-five people you don't know.
I don't make friends easily because of this. I'd rather be at home reading something or watching a movie than wherever I am currently. And because of that, I miss out on experiences and potential friends.
It's also hindered me professionally. I don't interview well. I know my stuff when it comes to Python webdev or running a retail store. But in an interview, I choke up and get nervous. I can sell you ice in a snowstorm or whip up a basic CRUD app in no time, but when it comes to selling myself...eh, I'm not so great. Part of it is being introverted and another part is my self-esteem. But I keep finding myself wishing I had taken that chance or done something bold. Instead, I end up beating myself up for something "stupid" I did rather than at least looking for the learning opportunity in it. Or completely half-assing something so I'm not overly committed when I get cold feet.
Throwing myself into stuff with reckless abandon would be the complete opposite of what I am now. While I don't want to be that, I want to be a little more like that. There's no reason why I shouldn't be able to give a presentation of something I know well to thirty-something people. There's no reason why I shouldn't be able to sell myself.

2: Linux and BASH

I use Linux and BASH every day. The only time I'm not using a Linux system is at work (which uses probably the worst POS system I've ever seen). My laptop's had some variant of Linux on it almost from day one. My tower, my RasPis. I'll even technically count my phone.
However, a few weeks ago my uncle quizzed me on the Linux filesystem. Turns out I don't know as much as I thought I did. Yeah, I got Arch up and running (third time's a charm) and I can modify my .bashrc and .bash_alias files to make my life easier. I use ssh and ~/.ssh/config and do most of my work from the command line. But really, at the heart of it, I feel like a script kiddie living in a hacker's world.
Currently, I'm experiencing a strange networking issue that seems to be twofold:
  • My tower's networking process seems to live in a quantum state of running and stopped. And it's always the opposite of what I expect it to be (even if I try tricking it).
  • My bridge router selectively hinders traffic behind it. Sometimes the tower connects just fine and other times, I'm about two steps from pulling my hair out. Everything behind the bridge communicates just fine. And the bridge and main router talk just fine. But moving beyond the bridge is...difficult sometimes.
I've got no idea what to do. Sure, I can ifconfig eth0 and traceroute and ping and dhclient -r eth0 all day long. But I'm just aping what I've seen other people do. Looks like it's happening right now despite the WiFi dongle.
I'm not saying that I want to be a network administrator (though, that'd be continuing the family tradition), but I should know enough to actually figure out what the heck is going on here. I know enough to be dangerous but that's about it.
And there's still quite a bit about BASH itself I don't know. I'm not wanting to develop complex shell scripts, but there's so much that can be done with a few lines of BASH that would take many more in Python. I'm not saying BASH is better than Python, but if I want to create sequential backups of a drive, store a log of each backup and do this every night at 3AM, I can do that in a few lines of BASH. Python's great for gluing programs together, but BASH is better suited to this sort of task.

3: Bass

I have a bass. It's a decent Peavey four-string. I've played it some over the years, but I never really devoted a ton of time to learning how to play. In high school, I took some lessons, but like people who commit to working out, I gave up after two months of not noticing gains -- even though they were happening. Now that I'm a little older and a little wiser, I figure it's time to give it a real shot.
Music is something that is important to me. I enjoy listening to it, working with it and sharing it. Why not produce some of my own? Not necessarily forming a band or putting videos on YouTube. Just playing to play and enjoying it. Besides, since I stopped working on cars, my hands have gotten soft.

4: Enroll in School

I've always kicked myself for not enrolling in college. Even though I would have ended up an English major -- probably -- if I'd gone right out of high school, having a college degree is now more important than ever. I can teach myself Python and SQL and Haskell all day, but the truth is, I know just enough to get by and recognize that there's so much more that I don't know. The worst part of knowing there's so much more out there is that I don't even know where to start, or if I have the right skills for it to make sense to me.
Endofunctors? Linked lists? Pointers? Or even something as seemingly mundane as SOLID. Like I get the idea, but I truly have no idea. It's a thing that does a thing and holds the data.
I know there's the whole circlejerk of "You don't need a degree to get a job in CompSci!" and I'm sure that's technically true: you're not going to bring the piece of paper that says, "Alec Reiter graduated from University of Derp blah blah blah blah" to an interview. But it's what the degree represents: "Alec Reiter has successfully completed a program ensuring he knows the basics of CompSci!"
I can self-teach CompSci, and I might learn it better that way than by sitting in a classroom two or three times a week. But in twelve years of teaching myself compsci, I've had one interview and I've produced only a handful of unimpressive projects. Just being 100% honest here. I'm not impressed with any of my works. And that's because I feel like I'm grasping at straws with some of the things I do and write, because I don't have the full picture. I feel the inner rockstar wizard ninja programmer in me, but I'm not sure how to let it out.
On top of all of that, there's other stuff I'm interested in that seems completely daunting to self-teach (at least to me): Electrical Engineering (ask me about my Grand Unified Coffee Pot sometime), Physics, Mathematics and Philosophy have piqued my interest many times. I'm not saying that I expect to be great at all of these, but I'd like to know more.

5: Become a marketable Web Developer

This ties in with the last resolution. I can't and won't say that my heart will always lie with Web Development. It's surely a bustling and busy field that is as varied as CompSci in general. But it's what I'm pretty good at right now. I like the internet. I like programming. Why not program the internet?
However, despite the fact that I'm a passable backend programmer (or is the term engineer now?), I'm a dreadful frontend guy. When it comes time to make the frontend, I'm just like, "Javascript...oh wait, this query's busted, let me fix that first."
Time to stop that. I can make the most beautiful REST API in the world, have a gorgeous query, and wrap it all in a nice, neat OOP app, but if I can't make even a generic frontend for it...what's the point? I know not every company is looking for a "full-stack" or "end-to-end" developer -- someone who implements a feature from database to CSS -- but many are at least looking for someone who can do that in a pinch.
I don't know Javascript. Frankly, I don't want to learn Javascript. An acquaintance told me, "Javascript is what happens when you drink too much, smoke too much and have to design a language in too little time." However, in order to do any modern web programming, Javascript is the only answer. Which saddens me; I'd love to see Python gain a true client-side implementation (not just compiling to Javascript). But until that day, I guess I'll be learning me a Javascript for great good.
On top of that, I'm resolving myself to using Django. Django's another thing I don't like (though, I've only used it a little). I much prefer the completely indifferent Flask (and I mean that affectionately). However, there are precisely zero job opportunities for Flask here -- well, that's probably not entirely true, I just haven't found any. While I may find Flask superior and use it for my personal projects, if I want to be marketable I'll have to pick up Django. Besides, being a one-trick pony is no good.

Other stuff

Of course, there's always smaller stuff that I'd like to do. I have a short list of languages I'd like to be passable in:
  • Python
  • BASH
  • Haskell
  • C/C++
  • Javascript
  • SQL
Some of these I know alright or have a decent grasp on, and others not so much. But I feel it's a well-rounded list. BASH, Python and Javascript are all scripting languages but lend themselves to wildly different tasks. SQL, to me, is a complete must for anyone interacting with a database (even if it's through an ORM, and especially in that case). C/C++ for "actual programming," or at least to familiarize myself with them; plus, many extensions to the other languages are written in C/C++. And Haskell because it interests me and it's radically different from the others. Not to mention the little bit I've learned has already helped me understand things like "The Clean Architecture" (the idea of separating I/O and data transformation completely) and OOP (Haskell's typeclasses are just data holders and rules for how functions interact with the data...sounds like OOP to me!). And of course, I'd like to explore the new hotness languages like D, Clojure/Scala, Go, Rust and others just because.
I'd like to actually stop smoking. I vaped on a custom 50watt box mod for a while and enjoyed it...until it was time to wrap my own coils. Given I have an IGO-W2 with a stripped post, a piece of shit something that's drilled out poorly and a 454 Big Block which has horizontal coils, I'm not making it the easiest on myself. But you know what? I actually enjoyed it otherwise. And if a little bit of elbow grease is needed, all the better.
Working on my car again is another thing. I'd be surprised if there wasn't at least one place in Atlanta where I can rent a bay and some tools for a day. I'm not at the point where I'm gonna rebuild the motor, but there's a few things I need to do to make my ride smoother -- motor mounts, inspecting the suspension, determining if I should replace the tranny fluid (rule of thumb: if you need to ask, the answer's probably don't).
Making use of my Raspberry Pis. I have two. What the heck do I do with them? Operate a coffee pot? A quick and dirty NAS? Emulate a smart TV? Emulate vidya? Run a music server? They just kinda sit there and blink at me from time to time. I kinda feel like Richmond, "And this one: flash, flash, flash and then wait for it. Nothing for a while. Here it comes. Double flash."
I'd love to start playing D&D or at least some game on a regular (or semi-regular) basis again.

Accountability

Like I said, I'm sharing all this to hold myself more accountable. To continue to do that, I'm going to keep writing about these topics. Maybe not every day and not exclusively about these topics, but as much and as often as I can. Different things I've learned or done that have built up into something bigger and better.
I look forward to 2015 and seeing how it shapes and changes my life. Hopefully you are, too!

Sunday, November 16, 2014

Iterate All The Things

So after the rousing success of making int iterable (which I now know I could have done with ForbiddenFruit), I started wondering, "Why aren't classes iterable?"

In [1]:
from itertools import repeat

class IterCls(type):
    
    def __iter__(self):
        return repeat(self)
            
class Thing(metaclass=IterCls):
    
    def __init__(self, frob):
        self.frob = frob
    
    def __iter__(self):
        return repeat(self.frob)

So, that's a thing. It works just like you'd expect: any class that declares IterCls as its metaclass can be iterated over unendingly.

In [2]:
from itertools import islice

list(islice(Thing, 3))
Out[2]:
[__main__.Thing, __main__.Thing, __main__.Thing]

And yes, the __iter__ on the actual Thing class works, too.

In [3]:
t = Thing(4)

list(islice(t, 10))
Out[3]:
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4]

But check this out:

In [4]:
from inspect import getsource

print(getsource(Thing.__iter__))
    def __iter__(self):
        return repeat(self.frob)


Odd, huh? I'm still wet behind the ears with metaclasses, so I'm not even 10% sure what's going on here, other than maybe: since Thing is an instance of IterCls (classes are objects), its instance dictionary defers to the class dictionary in IterCls when looking up methods. Hell if I know right now.
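My guess can at least be poked at directly. Here's a minimal sketch of my own (re-declaring the classes from above so it runs standalone) showing that dotted access and iteration consult different dictionaries:

```python
from itertools import repeat

class IterCls(type):
    def __iter__(self):
        return repeat(self)

class Thing(metaclass=IterCls):
    def __iter__(self):
        return repeat(4)

# Dotted access finds __iter__ in Thing's own __dict__ first...
print(Thing.__iter__ is Thing.__dict__['__iter__'])  # True
# ...but iter(Thing) performs special method lookup on type(Thing),
# which is IterCls, skipping Thing's __dict__ entirely.
print(next(iter(Thing)) is Thing)                    # True
```

That would explain why getsource shows Thing's own __iter__ while iterating over Thing itself still hits the metaclass's.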

I'm not really sure what you'd do with this. Maybe hook some sort of alternative initializer method on there? However, without using some sort of global or stashing class attributes (or are they instance variables in this case?), I don't think it'd be incredibly useful. And the two-argument version of iter with a factory function would be much clearer and way less magical. Something like this:

In [5]:
from itertools import count

def ThingFactory(start=0, step=1):
    frobs = count(start, step)
    def maker():
        nonlocal frobs
        return Thing(frob=next(frobs))
    return maker

for f in islice(iter(ThingFactory(), None), 3):
    print(f.frob)
0
1
2

And, in case you're wondering: yes, modules themselves can be made iterable as well. Inspired by fuckit module fuckery.

In [6]:
from runnables import itermodule

print(itermodule)
print(list(islice(itermodule, 4)))
<module 'wtf'>
[4, 4, 4, 4]

It's a module that implements an __iter__. And it just spits out 4 all day long. Code here. I was musing about it in ##learnpython on Freenode, and one user commented it might maybe possibly be useful as a datatype (import the module and use it to represent a CSV file, for example). But the real question is, "Why not just use a class in that case?"
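For the curious, here's a rough sketch of one way such a module can be built. I'm not claiming this is how the linked code does it, and assigning a module's __class__ only works on Python 3.5+; building the module by hand with types.ModuleType keeps the sketch self-contained:

```python
from types import ModuleType
from itertools import repeat, islice

class IterableModule(ModuleType):
    """A module type whose instances spit out 4 all day long."""
    def __iter__(self):
        return repeat(4)

# Build a module object by hand and swap its class. Inside a real
# module file you'd do this instead:
#     import sys
#     sys.modules[__name__].__class__ = IterableModule
wtf = ModuleType('wtf')
wtf.__class__ = IterableModule

print(list(islice(wtf, 4)))  # [4, 4, 4, 4]
```

Since dunder methods are looked up on the type, plopping an __iter__ into the module's own dictionary wouldn't work; the class swap is what makes the protocol kick in.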

Why?

I was bored. I wanted to see what Python would let me get away with in terms of making things iterable. At this point, I'm confident that everything can be made iterable. An object method?

In [7]:
from functools import partial

class IterMethod:
    def __init__(self, f):
        self.f = f
    def __get__(self, inst, cls):
        f = self.f
        if inst:
            f = partial(f, inst)
        return repeat(f)

class Thing:
    
    @IterMethod
    def frob(self, frob):
        return frob
    
print("As instance method")
for f in islice(Thing().frob, 2):
    print(f(4), end=' ')

print("\nAs class method")    
for f in islice(Thing.frob, 2):
    print(f(None, 5), end=' ')
As instance method
4 4 
As class method
5 5 

Though, I think a straight up @property would be clearer and probably more in line with what was expected:

In [8]:
class Thing:
    
    def __init__(self):
        self.frobs = count()
    
    @property
    def frob(self):
        return iter(self.frobs.__next__, None)

            
t = Thing()
print(list(islice(t.frob, 5)))
print(list(islice(t.frob, 5)))
[0, 1, 2, 3, 4]
[5, 6, 7, 8, 9]

Iterable function? Use repeat as a decorator...actually, don't really do that. Just use repeat as normal. But in the face of boredom, clearer minds rarely prevail.

In [9]:
@repeat
def frob():
    return 4

fs = [f() for f in islice(frob, 4)]
print(fs)
[4, 4, 4, 4]

I guess don't do this at home? I can't really think of any practical applications for these sorts of things. But if you need to do them...I guess use this as a reference point? Actually, I can think of an application of an iterable function: composing a function N times. I've borrowed the compose function from here:

In [10]:
from functools import reduce

def compose(*functions):
    def compose2(f, g):
        return lambda x: f(g(x))
    return reduce(compose2, functions)

def frob(x):
    return x + 4

n_times = compose(*repeat(frob, 4))
print(n_times(0))
16

If you find any other useful applications, let me know, I'll gladly add them as examples.

Understanding foo.bar()

One of my favorite shows is "How It's Made" -- my enjoyment mostly stems from learning how stuff is made, but the narrator's cheeky puns and jokes certainly add to it. But something I enjoy even more than knowing how stuff is put together is knowing how things work. I don't know what it is, but I have this childlike fascination with opening things up and learning how they fit together and what each part does. That was one of my favorite things about my brief stint (a whopping six months!) in the automotive service industry: understanding, a little better, how cars work. It certainly opened my eyes to all the work that goes into even simple automotive repairs.

Sadly, I no longer work on or with cars. I do still fiddle some with mine, though, and if anyone has a good link to how a transmission -- manual or automatic -- actually works, I'd be thrilled! But this has left me with a hole in my life. One I've recently begun to fill with learning how Python operates under the hood -- so to speak. While my skills with C -- which basically amount to printf and for loops -- leave me woefully unprepared to examine much of the source, I can examine the surface parts.

To use a car analogy, if reading the C source for Python is repairing a damaged block or transmission, examining how Python works is more similar to replacing motor mounts and broken belts (something I'm regretfully too familiar with on my CRV). Whereas reading someone else's Python is like doing your own fluid changes. Flawed analogies aside, I'd like to more fully examine how Python objects work and what it really means to call foo.bar().

As a forewarning, this knowledge is great for understanding what's happening, but it's not crucial knowledge to working with classes and objects in the regular sense. All the things I will discuss here deal with how Python 3 handles them. Python 2 is slightly different.

Building a Class

To talk about Python's data model and how it relates to classes and objects, we should first write a class. It's so basic you might wonder why we're bothering. The point is, rather than examine some fictional class or object, why not have one of our own to open up and poke at?

In [1]:
class Baz:
    
    def __init__(self, thing):
        self.thing = thing
    
    def bar(self):
        print(self.thing)

That's an extremely basic class. The initializer takes a single argument, and bar is a method that prints it out. Of course, we need to instantiate it to get any use out of it.

In [2]:
foo = Baz(1)

Already, there's some mechanism at work for us. I don't want to get too deep into class creation, but the short takeaway is that the implicit __new__ that classes inherit from object handles object creation, and __init__ simply sets the initial state of the object for us. Delving into __new__ hooks into dealing with metaclasses, which is a topic for another time. What I want to focus on today is what happens when we call foo.bar().
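That split is easy to see with a quick sketch of my own (the class name and prints are just for illustration):

```python
class Traced:
    # __new__ actually creates the instance; deferring to object.__new__
    # does the real allocation for us
    def __new__(cls, *args, **kwargs):
        print('__new__: creating the instance')
        return super().__new__(cls)

    # __init__ only sets the initial state of the already-created instance
    def __init__(self, thing):
        print('__init__: setting initial state')
        self.thing = thing

t = Traced(1)
# __new__: creating the instance
# __init__: setting initial state
```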

Classes and Objects

You'll often hear that objects and classes in Python are nothing more than a pile of dictionaries with dotted access. This obtuse phrasing confused me for a long time, and it wasn't until I began asking, "How the heck does self actually get passed?" that I began to understand. Asking this sent me down a rabbit hole that led me to descriptors and __getattribute__ and what they do.

The Dict

All classes in Python have an underlying __dict__ and nearly every instance does as well. The first step to understanding foo.bar() is knowing that methods live at the class level.

In [3]:
print('bar' in Baz.__dict__)
print('bar' in foo.__dict__)
True
False

Methods are entries in the class's underlying __dict__ but not in the instance's. Because of this, most Python objects can remain relatively small: they simply store their state rather than all of their available methods as well. What does this method look like in the dictionary?

In [4]:
from inspect import isfunction, ismethod

print(isfunction(Baz.__dict__['bar']))
print(ismethod(Baz.__dict__['bar']))
print(Baz.__dict__['bar'])
True
False
<function Baz.bar at 0x7f1d05a87ea0>

We can see that in the class's dictionary, methods are stored as functions and not as methods. It's reasonable to infer that methods are actually functions that operate on class instances. From here, we can imagine that something like this happens behind the scenes:

In [5]:
Baz.__dict__['bar'](foo)
1

Attribute Access

The next piece of the puzzle is how Python handles attribute access. If you're not familiar with how Python attribute lookup happens, in short, it looks like this:

  • Call __getattribute__
  • Is the attribute in the object __dict__?
  • No? Is the attribute in the class's __dict__?
  • No? Is the attribute in any of the parent classes' __dict__?
  • No? Call __getattr__ if present.
  • Else, raise an AttributeError

Python starts at the top of that list, calling __getattribute__. This is what actually allows the dotted access. You can think of the . in foo.bar as an implicit call to this method. It translates dotted access into dictionary lookup and invokes the rest of the chain. Since we already know that methods live in the class's __dict__ and that methods are functions that act on the instance, we'll fast forward to there and extrapolate.
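The chain above can even be sketched in Python itself. This is a rough model of my own, not CPython's actual implementation (it deliberately ignores descriptors, slots and plenty of corner cases), but it captures the order:

```python
def simplified_getattribute(obj, name):
    """A rough sketch of the attribute lookup chain -- not the real thing."""
    # is the attribute in the object's __dict__?
    if name in getattr(obj, '__dict__', {}):
        return obj.__dict__[name]
    # no? is it in the class's __dict__, or any parent class's?
    for klass in type(obj).__mro__:
        if name in klass.__dict__:
            return klass.__dict__[name]
    # no? call __getattr__ if present
    getattr_hook = getattr(type(obj), '__getattr__', None)
    if getattr_hook is not None:
        return getattr_hook(obj, name)
    # else, raise an AttributeError
    raise AttributeError(name)

class Baz:
    def __init__(self, thing):
        self.thing = thing
    def bar(self):
        print(self.thing)

foo = Baz(1)
print(simplified_getattribute(foo, 'thing'))                       # 1
print(simplified_getattribute(foo, 'bar') is Baz.__dict__['bar'])  # True
```

Note that this naive version hands back the raw function for bar, not a bound method; closing that gap is exactly where the rest of this post is headed.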

Since methods are functions that live in the class's dictionary and act on instances and __getattribute__ is an implicit transformation from attribute to dictionary look up, we can infer that method calls look like this behind the scenes:

In [6]:
Baz.bar(foo)
1

Methods vs Functions

So far so good. All this is pretty easy to grasp. But there's still the burning question of how the heck self (or rather foo) is being passed to our methods. If we examine both Baz.bar and foo.bar, we can see there's a transformation going on somewhere.

In [7]:
print(Baz.bar)
print(foo.bar)
<function Baz.bar at 0x7f1d05a87ea0>
<bound method Baz.bar of <__main__.Baz object at 0x7f1d05a88208>>

Python is somehow transforming our function that lives in Baz's dictionary into a method bound to our instance foo. The answer lies in the descriptor protocol. I've written about it elsewhere, and it's probably time to revise it again with my recent understanding. But essentially, descriptors add another rule to our attribute lookup, just before the __getattr__ call: if the lookup found a descriptor, call the __get__ method on the descriptor.

This is our missing link. When a function is declared in a class, not only is it placed in the class's dictionary, it also acts as a descriptor -- or more accurately, a non-data descriptor, because it only defines the special __get__ method. The way descriptors work is by intercepting the lookup of specific attributes.

The descriptor likely bears a passing resemblance to this (of course, the real one is implemented in C):

In [8]:
from types import MethodType

class MethodDescriptor:
    def __init__(self, method):
        self.method = method
    
    def __get__(self, instance, cls):
        if instance is None:
            return self.method
        return MethodType(self.method, instance)

So, our initial thought of what foo.bar() looks like under the covers was wrong. It more accurately resembles:

In [9]:
Baz.__dict__['bar'].__get__(foo, Baz)()
# if we inspect it we see the truth
print(Baz.__dict__['bar'].__get__(foo, Baz))
1
<bound method Baz.bar of <__main__.Baz object at 0x7f1d05a88208>>

And in fact, if we put our imitation method descriptor into action, it works similarly to how object methods do.

In [10]:
def monty(self, x):
    print(x)

class Spam:
    eggs = MethodDescriptor(monty)
    
    # of course, it's also usable as a decorator
    @MethodDescriptor
    def bar(self):
        return 4
    
ham = Spam() # a lie if I ever saw one
print(Spam.eggs)
print(ham.eggs)
ham.eggs(1)
print(ham.bar())
<function monty at 0x7f1d045cef28>
<bound method Spam.monty of <__main__.Spam object at 0x7f1d05a780b8>>
1
4

The reason we see a function when we access the bar method through the class is that the descriptor has already run and decided it should simply return the function itself.

Saturday, November 8, 2014

Observer Pattern through Descriptors

Recap

In the last post about descriptors I introduced the concept of building an observer pattern with descriptors, something Chris Beaumont almost teases with in his Python Descriptors Demystified. But, I feel he left a lot on the table with that concept.
Before delving deep into the code (and this post is going to be very code heavy), let's recap what we learned last time:
  • How Python handles attribute access on objects.
  • What the descriptor protocol is and how to implement it briefly.
  • Storing the data on the object's __dict__.
  • Using a metaclass to register the descriptors for us.
And now for a little bit of code dump to get it active in this notebook as well as reminding us what it looks like:
In [1]:
class Descriptor:
    def __init__(self, name=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

class checkedmeta(type):
    def __new__(cls, clsname, bases, methods):
        # Attach attribute names to the descriptors
        for key, value in methods.items():
            if isinstance(value, Descriptor):
                value.name = key
        # really we should use super rather than type here
        return super().__new__(cls, clsname, bases, methods)

Callbacks

Callbacks are simply actions that run in response to something. They allow external code to react to and hook into your code. This style of programming is very common in, for example, Node.js. It can be utilized in Python as well. For now, I'm going to stick with my business-crucial to_lower as our callback to give an example before moving on to actually working with the observer pattern.
In [2]:
# pretend this lives in a package called critical
# and actually does something really useful
def to_lower(value):
    return value.lower()

def print_lower(value):
    print(to_lower(value))
    
#from critical import print_lower
def my_business_logic(value, callback):
    remove = 'aeiou'
    
    value = ''.join([c for c in value if not c.lower() in remove])
    callback(value)
    return value

my_business_logic('Alec Reiter', callback=print_lower)
lc rtr

Out[2]:
'lc Rtr'
Now, the callback could have done anything: updating a database, sending a tweet, or simply plugging the value into a grander processing framework. Node.js uses callbacks for things like error handling in view functions. This is just to give an idea of what's happening in a basic sense: your code runs and then sends a request to the callback for more action. Implementing callback descriptors is pretty easy.
In [3]:
class CallbackAttribute(Descriptor):
    
    def __init__(self, callback=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.callback = callback
    
    def __set__(self, instance, value):
        instance.__dict__[self.name] = value
        if self.callback:
            self.callback(instance, self.name, value)

def frobed_callback(instance, name, value):
    print("Set {} on {!s} to {}".format(name, instance, value))
            
class Thing(metaclass=checkedmeta):
    frob = CallbackAttribute(callback=frobed_callback)
    
    def __init__(self, frob):
        self.frob = frob

foo = Thing(frob=4)
Set frob on <__main__.Thing object at 0x7f0d2c2c6358> to 4

Of course, this is an incredibly limited callback descriptor: we're limited to a single callback that's set at class definition time. But it merely serves as an example of what's to come.
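As a taste of that, here's one hypothetical way to loosen both limits: several callbacks, registerable after class definition. MultiCallbackAttribute and announce are names of my own invention, and the Descriptor base class is re-declared from the recap so this runs on its own (no metaclass here, so the name is set by hand):

```python
class Descriptor:
    def __init__(self, name=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

class MultiCallbackAttribute(Descriptor):
    """Hypothetical: like CallbackAttribute, but holds many callbacks."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.callbacks = []

    def add_callback(self, callback):
        self.callbacks.append(callback)
        return callback  # returning it lets this double as a decorator

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value
        for callback in self.callbacks:
            callback(instance, self.name, value)

class Thing:
    frob = MultiCallbackAttribute(name='frob')

    def __init__(self, frob):
        self.frob = frob

@Thing.frob.add_callback
def announce(instance, name, value):
    print("Set {} to {}".format(name, value))

foo = Thing(frob=4)   # Set frob to 4
foo.frob = 5          # Set frob to 5
```

Since the descriptor defines only __set__, accessing Thing.frob on the class hands back the descriptor itself, which is what makes the decorator registration work.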

Observers

According to wikipedia,
The Observer Pattern is a software design pattern in which an object, called the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods. It is mainly used to implement distributed event handling systems.
And according to the Gang of Four:
Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.
The Gang of Four goes on to state that observers and subjects shouldn't be tightly coupled, because that reduces the ability to reuse them elsewhere. Put plainly, your subject shouldn't have hard-coded logic that calls specific observers. Rather, you should be able to register instances of observers onto an object (or class) and have it call out to them programmatically.
You might run into other names such as Event Handler, PubSub/Publisher-Subscriber, or Signals. These are all variations (to my best understanding) on the pattern with minute but important differences. I won't delve into them, but the take away is that all four of these follow the same basic pattern: An object hooks callbacks which run when they're notified of something.
An easy implementation of this would look like this:
In [4]:
from abc import ABCMeta, abstractmethod

class SubjectMixin:
    """Mixin that will allow an object to notify observers about changes to itself."""
    
    def __init__(self, observers=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._observers = []
        if observers:
            self._observers.extend(observers)
    
    def notify(self):
        for observer in self._observers:
            observer.update(self)
    
    def add_observer(self, observer):
        if observer not in self._observers:
            self._observers.append(observer)
    
    def remove_observer(self, observer):
        if observer in self._observers:
            self._observers.remove(observer)
        
class ObserverMixin(metaclass=ABCMeta):
    """Mixin that will allow an object to observe and report on other objects."""
    
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
    
    @abstractmethod
    def update(self, instance):
        return NotImplemented
An initial attempt at this pattern will use inheritance (or interfaces if you're using something like PHP or Java where single inheritance is the only option). The pattern is simple:
  • We store observers in a private (or at least as private as Python allows) list
  • When we need to notify the observers, we do so explicitly by calling each one's update method
Observers are free to implement update in whatever way, but they must implement it. A simple implementation might look like this.
In [5]:
class Person(SubjectMixin):
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

class PrintLowerNoVowels(ObserverMixin):
    
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
    
    def update(self, instance):
        remove = 'aeiou'
        value = instance.name.lower()
        value = ''.join([c for c in value if c not in remove])
        print(value)

plnv = PrintLowerNoVowels()
me = Person(name="Alec Reiter", observers=[plnv])
me.notify()
lc rtr

This is generally how it's implemented -- at least in most of the articles I read. It's also possible to automate the notification via a property. Say we wanted to notify the observers every time the name attribute on a Person instance changes. We could write that logic everywhere we set it. Maybe apply it with a context manager or decorator. But tying it to the object makes the most sense.
In [6]:
class Person(SubjectMixin):
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.__name = None
        self.name = name
    
    @property
    def name(self):
        return self.__name
    
    @name.setter
    def name(self, value):
        if value != self.__name:
            self.__name = value
            self.notify()

me = Person(name="Alec Reiter", observers=[plnv])
lc rtr

If we're concerned about automatically notifying the observers any time an attribute is changed, we could just override __setattr__ to handle this for us, which circumvents the need to write a property for every attribute if this is the only action we're concerned with. It's super easy to implement as well.
In [7]:
class Person(SubjectMixin):
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
    
    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        self.notify()
        
me = Person(name="Alec Reiter", observers=[plnv])
lc rtr

And that's all well and good. Not to mention a good deal less complicated than what I'm about to delve into. But it's also less fun for me. I'm not going to advocate for any one of these implementations over the others, except to say that the one I'm going to focus on offers a much finer grain of control.

Watching specific attributes

However, if we're concerned with monitoring specific attributes for changes, descriptors are the correct way to handle this. Why bother emitting an event every time age is changed if we only care about name or email?
The first step is to identify the logic we'd end up repeating in each property and move it into a separate object. We'll call this new class WatchedAttribute.
In [8]:
class WatchedAttribute(Descriptor):
    def __init__(self, name=None, *args, **kwargs):
        super().__init__(name, *args, **kwargs)
    
    def __set__(self, instance, value):
        if self.name not in instance.__dict__ or value != instance.__dict__[self.name]:
            instance.__dict__[self.name] = value
            instance.notify()

class Person(SubjectMixin, metaclass=checkedmeta):
    name = WatchedAttribute()
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
        
me = Person(name="Alec Reiter", observers=[plnv])
me.name = "Alec Reiter"
lc rtr

Now we can add multiple watched attributes without rewriting a property each time just to change the variable name. If we split the name attribute into first and last names, or add an email attribute, it's easy: just add another WatchedAttribute entry at the class level and set it in __init__.
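As a sketch of that expansion, here's a self-contained approximation: __set_name__ (added in Python 3.6) stands in for the checkedmeta metaclass from earlier, and the observers are plain callables rather than ObserverMixin subclasses, purely to keep the example short.

```python
class WatchedAttribute:
    def __set_name__(self, owner, name):
        # The checkedmeta metaclass used to tell the descriptor its name;
        # here Python 3.6+'s __set_name__ hook does it for us.
        self.name = name

    def __get__(self, instance, cls):
        if instance is None:
            return self
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        # Only store and notify when the value actually changes.
        if instance.__dict__.get(self.name, object()) != value:
            instance.__dict__[self.name] = value
            instance.notify()

class Person:
    first = WatchedAttribute()
    last = WatchedAttribute()
    email = WatchedAttribute()

    def __init__(self, first, last, email, observers=None):
        self._observers = list(observers or [])
        self.first = first
        self.last = last
        self.email = email

    def notify(self):
        for observer in self._observers:
            observer(self)

changes = []
me = Person("Alec", "Reiter", "alec@example.com",
            observers=[lambda person: changes.append(person.first)])
me.email = "alec@example.com"   # unchanged value: no notification fires
print(len(changes))             # 3 -- one per watched attribute set in __init__
```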
But I feel we can improve on this pattern as well. There are two big things I'm not a fan of with this implementation:
  • We manipulate the underlying dictionary to store the values.
  • The Subject is responsible for notifying the Observers.
We can fix both of these things, but the first will take us down a side road.

Alternative Data Store

The first issue is trickier. We need to relate instances to values without creating a mess we'll have to clean up later, or a memory leak that will absolutely murder a long-running process. The most effective way of handling both is weak references.

References

CPython (the implementation I'm using) uses reference counting to determine if an object should be garbage collected. When an object's reference count drops to 0, its space in memory can be reclaimed by Python for use elsewhere. Sometimes we want to hold a reference to an object, but not so tightly that it can't be garbage collected if we forget about it. Consider this:
In [9]:
print(me)
registry = {"me" : me}
<__main__.Person object at 0x7f0d2c2c64e0>

Storing instances in a dictionary as keys or values (or in a list or set) as a form of caching is extremely common. But if we remove all the other references to the object lying around...
In [10]:
del me
...that reference is left hanging around:
In [11]:
me = registry['me']
print(me)
<__main__.Person object at 0x7f0d2c2c64e0>

Before this gets too sidetracked into weak references, I want to note that they're not a silver bullet and require a little more knowledge about Python to use efficiently. You can still shoot your foot off with them. In this case, we're not using them to prevent cycles but to maintain a cache.
Peter Parente wrote about weak references on his blog, and while some of the information is outdated (the new module was deprecated in 2.6 and replaced with types), it's still relevant to understanding what weak references are. And Doug Hellmann explored the weakref module in his Python Module of the Week series.
But the short of it is that an instance of WeakKeyDictionary, WeakValueDictionary or WeakSet will prevent this. Most things can be weakly referenced -- the documentation goes into detail about what can be: "class instances, functions written in Python (but not in C), instance methods, sets, frozensets, some file objects, generators, type objects, sockets, arrays, deques, regular expression pattern objects, and code objects."
When you're using a WeakKeyDictionary or WeakSet, the object must meet one more requirement: it must be hashable. So objects like list or dict, even if they were implemented in Python, can't take advantage of these structures. However, outside of a few corner cases, this restriction won't affect us here.
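To see one of these in action before we wire it into the descriptor, here's a quick sketch of WeakValueDictionary fixing the stale-registry problem from above. This assumes CPython's reference counting, which frees the object as soon as the last strong reference goes away.

```python
import gc
from weakref import WeakValueDictionary

class Person:
    pass

me = Person()
registry = WeakValueDictionary()
registry['me'] = me
print('me' in registry)   # True -- the cache holds only a weak reference

del me                    # drop the last strong reference
gc.collect()              # CPython frees it immediately; collect() is just belt-and-braces
print('me' in registry)   # False -- the entry vanished on its own
```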
Implementing it is incredibly easy.
In [12]:
from weakref import WeakKeyDictionary

class WatchedAttribute(Descriptor):
    
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.values = WeakKeyDictionary()
    
    def __get__(self, instance, cls):
        return self.values[instance]
    
    def __set__(self, instance, value):
        if instance not in self.values or value != self.values[instance]:
            self.values[instance] = value
            instance.notify()

class Person(SubjectMixin):
    name = WatchedAttribute()
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
        
me = Person(name="Alec Reiter", observers=[plnv])
lc rtr

You'll notice the metaclass we were using before is gone now. Since we're storing the information in a cache inside the descriptor, it no longer needs to know what name it's being held under.

Descriptors as Subjects

The next issue was moving the publishing of events out of the main object. The main reason for this is to notify only certain subscribers when an attribute changes, rather than all of them. This explores what happens when we access a descriptor through the class rather than an instance -- in other words, answering, "What does cls (or type) do in __get__?"

Accessing the Descriptor

Descriptors are just objects that happen to follow a certain protocol; that doesn't mean they can't have other methods, or even follow multiple protocols. An object could be both a descriptor and an iterator, for example. However, getting to these other methods can be tricky. We obviously can't do it through an instance; Python resolves that access to the __get__ method and returns a value.
This means we have to go through the class. But the way our descriptor is set up, it'll blow up when an instance isn't passed to it. We could simply return the descriptor itself when no instance is passed...would it work? Spoilers: it does. So we can move the registration of observers and the notification fully into the descriptor, and our SubjectMixin can be redefined to work with it.
Actually, we end up redefining the Descriptor and WatchedAttribute classes as well. Fair warning: this is a bit of a code dump.
In [13]:
from weakref import WeakSet

class SubjectMixin:
    def __init__(self, observers=None, *args, **kwargs):
        self._observers = WeakSet()
        super().__init__(*args, **kwargs)
        
        if observers:
            self._observers.update(observers)
    
    def notify(self, instance):
        for observer in self._observers:
            observer.update(instance)
    
    def add_observer(self, observer):
        self._observers.add(observer)
    
    def remove_observer(self, observer):
        if observer in self._observers:
            self._observers.remove(observer)
In [14]:
class CachingDescriptor(Descriptor):
    
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._instances = WeakKeyDictionary()
    
    def __get__(self, instance, cls):
        if instance is None:
            return self
        return self._instances[instance]
    
    def __set__(self, instance, value):
        self._instances[instance] = value
In [15]:
class WatchedAttribute(CachingDescriptor, SubjectMixin):
    def __init__(self, observers=None, *args, **kwargs):
        super().__init__(observers=observers, *args, **kwargs)
    
    def __set__(self, instance, value):
        super().__set__(instance, value)
        self.notify(instance)
            
class Person:
    name = WatchedAttribute()
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
        
Person.name.add_observer(plnv)
me = Person(name="Alec Reiter")
lc rtr

There are some subtle changes going on here that you might miss unless you explicitly diff the preceding implementation of SubjectMixin with this one.
The observer container changed from a list to a WeakSet. Both are iterable, which means notify doesn't change (at least not because of this). WeakSet combines the strengths of sets (which only contain unique items) and weak references. The only things WeakSet won't handle are keeping the observers in any particular order and dealing with unhashable types -- neither of which affects us. You'll notice that adding elements to a set is slightly different from a list, so it's not a complete drop-in replacement.
I will note I went back and forth between using WeakSet and a regular set, mostly over one question: if we remove all other references to an observer, do we intend for it to keep processing notifications? My thought on the matter is no, the observer should be considered dead. In other cases, the goal could be to have "anonymous" observers -- objects that are created and immediately injected into the framework rather than assigned to a name and passed in. If that's the desire, then a WeakSet won't keep the object from being immediately garbage collected. I'll leave the pros and cons of both approaches as an exercise to the reader. ;)
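That trade-off is easy to demonstrate directly (again assuming CPython, where dropping the last strong reference frees the object immediately; the Observer class here is illustrative):

```python
import gc
from weakref import WeakSet

class Observer:
    def update(self, instance):
        print("saw", instance)

observers = WeakSet()
observers.add(Observer())   # "anonymous" observer with no other reference
gc.collect()                # CPython drops it right away; collect() for safety
print(len(observers))       # 0 -- it never survived registration

named = Observer()          # a named observer keeps a strong reference alive
observers.add(named)
print(len(observers))       # 1
```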
The next subtle difference is that SubjectMixin.notify now accepts an instance explicitly. Since we've moved this logic into the descriptor, passing self would end up passing the descriptor instance rather than an instance of the class it's attached to.
Other than that, it's just a matter of knowing how multiple inheritance works. Which is a completely separate matter best left to another time. It involves liberal use of super to say the least.
The short of it is that WatchedAttribute combines the methods and data from both CachingDescriptor and SubjectMixin. CachingDescriptor can worry about being a descriptor that stores information in a weak-reference dictionary, and SubjectMixin can worry about being the basis for observed subjects -- it's applicable to both descriptors and other objects. WatchedAttribute just overrides how CachingDescriptor.__set__ operates (or rather extends it, if you want to split hairs) to combine the two fully.
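The cooperative-super machinery can be sketched in miniature (the class names here are illustrative stand-ins, not the classes above): each __init__ consumes its own keyword arguments and forwards the rest up the MRO, so every base runs exactly once.

```python
class CachingBase:
    def __init__(self, **kwargs):
        self.cache = {}              # this base's own state
        super().__init__(**kwargs)   # pass the rest along the MRO

class SubjectBase:
    def __init__(self, observers=None, **kwargs):
        self.observers = list(observers or [])
        super().__init__(**kwargs)

class Watched(CachingBase, SubjectBase):
    pass

w = Watched(observers=['a'])
print([cls.__name__ for cls in Watched.__mro__])
# ['Watched', 'CachingBase', 'SubjectBase', 'object']
print(w.cache, w.observers)   # {} ['a']
```

Because both bases call super(), construction threads through the whole MRO even though Watched itself defines no __init__.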

Going Further

We could, of course, go further and register observers both globally -- for every instance of an object with the WatchedAttribute -- and for specific instances. Implementing this is just a mite trickier, but not terribly so. The first step is to imitate the behavior of collections.defaultdict in a WeakKeyDictionary. Emulating defaultdict is pretty straightforward: define __missing__, hook it into __getitem__, and accept a default factory in the constructor.
The reason for building this is to use a WeakSet to hold the observers that are local to a particular instance.
In [16]:
class WeakKeyDefaultDict(WeakKeyDictionary):
    
    def __init__(self, default_factory=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_factory = default_factory
    
    def __getitem__(self, key):
        try:
            return super().__getitem__(key)
        except KeyError:
            return self.__missing__(key)
    
    def __missing__(self, key):
        if not self.default_factory:
            raise KeyError(key)
        value = self.default_factory()
        super().__setitem__(key, value)
        return value
With that built, we can reconstruct WatchedAttribute to hold both "global" and "local" observers.
In [17]:
class WatchedAttribute(CachingDescriptor, SubjectMixin):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._local_observers = WeakKeyDefaultDict(WeakSet)
    
    def __set__(self, instance, value):
        super().__set__(instance, value)
        self.notify(instance)
    
    def add_observer(self, observer, instance=None):
        if instance is None:
            super().add_observer(observer)
        else:
            self._local_observers[instance].add(observer)
            
    def remove_observer(self, observer, instance=None):
        if instance is None:
            super().remove_observer(observer)
        else:
            if observer in self._local_observers[instance]:
                self._local_observers[instance].remove(observer)
            
    def notify(self, instance):
        observers = self._observers | self._local_observers[instance]
        for observer in observers:
            observer.update(instance)
The real question now is: how does it handle? It should behave the same as previous iterations of WatchedAttribute, except for the specific behavior we've overridden here. I'm also going to add some convenience methods to the Person class to make it slightly easier to interact with the observers.
In [18]:
class Person:
    name = WatchedAttribute()
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
    
    def access_watched(self, attr):
        return getattr(self.__class__, attr)
    
    def attach(self, attr, observer, global_=False):
        watched = self.access_watched(attr)
        inst = None if global_ else self
        watched.add_observer(observer, inst)
    
    def detach(self, attr, observer, global_=False):
        watched = self.access_watched(attr)
        inst = None if global_ else self
        watched.remove_observer(observer, inst)
        
class PrintUpper(ObserverMixin):
    
    def update(self, instance):
        print(instance.name.upper())
        

pu = PrintUpper()
me = Person(name=None)
me.attach('name', plnv, global_=True)
me.attach('name', pu)
me.name = "Alec Reiter"
lc rtr
ALEC REITER

In [19]:
other = Person(name="Ol' Long Johnson")
l' lng jhnsn

As we can see, the observer that prints the value of Person.name in upper case is bound only to the first instance of Person, whereas the one that strips out the vowels and prints the result is bound to all instances. It's also possible to create an ignore method that would allow specific instances to ignore certain observers. Or even better, create a set of rules that can be followed: "Only invoke this observer if the value doesn't change."
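A set of rules like that can be sketched by wrapping observers in predicates. ConditionalObserver and Recorder here are hypothetical helpers, not part of the code above:

```python
class ConditionalObserver:
    """Invoke the wrapped observer only when predicate(instance) is true."""
    def __init__(self, observer, predicate):
        self.observer = observer
        self.predicate = predicate

    def update(self, instance):
        # Delegate only when the rule passes.
        if self.predicate(instance):
            self.observer.update(instance)

class Recorder:
    def __init__(self):
        self.seen = []

    def update(self, instance):
        self.seen.append(instance.name)

class Person:
    def __init__(self, name):
        self.name = name

recorder = Recorder()
long_names_only = ConditionalObserver(recorder, lambda p: len(p.name) > 5)
long_names_only.update(Person("Al"))            # rule fails: ignored
long_names_only.update(Person("Alec Reiter"))   # rule passes: recorded
print(recorder.seen)                            # ['Alec Reiter']
```

Since the wrapper exposes the same update method, it can be registered anywhere a plain observer can.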
Something I've curiously ignored is pre-subscribing observers: when we create the class, we attach a predetermined list of observers to the attribute. This is a feature of the original SubjectMixin class and is inherited by WatchedAttribute (or, as Raymond Hettinger would put it: WatchedAttribute delegates the work to SubjectMixin).
In [20]:
class Person:
    name = WatchedAttribute(observers=[plnv])
    
    def __init__(self, name, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name
    
    def access_watched(self, attr):
        return getattr(self.__class__, attr)
    
    def attach(self, attr, observer, global_=False):
        watched = self.access_watched(attr)
        inst = None if global_ else self
        watched.add_observer(observer, inst)

me = Person(name="Alec Reiter")
lc rtr

Fin

This method of implementing the observer pattern allows very fine-grained control. I'm not advocating it as a good solution -- or even a workable solution on its own. There's plenty left on the table as far as details and issues go. For example, how would this expand to publishing events to a message queue (ZeroMQ or Redis)? Or how does it interact with asyncio or Twisted? Integrating this pattern with an existing framework (blinker, for example) would probably be the best solution.
Rather, it's meant as an introduction to the true power of what you can do with descriptors, beyond just making sure a string is all lower case or normalizing floating-point numbers to decimal.Decimal instances. Those are valid uses of descriptors, don't take that the wrong way.
Some of the concepts introduced here -- manipulating descriptors on both the instance and class levels -- are used to build tremendously flexible systems. Ever wonder how SQLAlchemy seems to magically treat class attributes as parameters in search queries, but then they're magically filled with data on the instance level? Descriptors and that if instance is None check.
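As a toy illustration of that trick (this is nothing like SQLAlchemy's actual internals, just the shape of the idea): class-level access hands back an object that can build query-like expressions, while instance-level access returns stored data.

```python
class Column:
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, cls):
        if instance is None:
            return self            # class access: hand back the descriptor
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value

    def __eq__(self, other):
        # Class-level comparison builds a "filter" tuple instead of a bool.
        return ('eq', self.name, other)

    __hash__ = object.__hash__     # defining __eq__ would otherwise disable hashing

class Person:
    name = Column('name')

    def __init__(self, name):
        self.name = name

print(Person.name == 'Alec')       # ('eq', 'name', 'Alec') -- a query fragment
print(Person('Alec').name)         # Alec -- plain data on the instance
```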

Further Reading