I banged out my first submodule. Great. Then I wrote a unit test module in that submodule and tried to do some relative imports with it. Got an error: "attempted relative import in non-package".
Huh, that's funny. I could have sworn I had __init__.py files all the way up that chain. Yup, they're all there.
Well, it turns out that if a module uses relative imports, you cannot run that module directly as __main__. You have to import it within the context of the larger package. So much for writing your unit tests right there in the file.
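A minimal reproduction of what I'm describing (the package and module names here are made up): the same test module with a relative import works when run as a module inside the package, but fails when run directly as a script.

```python
import os
import subprocess
import sys
import tempfile

def reproduce():
    """Build a throwaway package and run its test module two ways."""
    with tempfile.TemporaryDirectory() as root:
        pkg = os.path.join(root, "pkg")
        os.makedirs(pkg)
        # __init__.py files all the way up the chain, just like mine.
        open(os.path.join(pkg, "__init__.py"), "w").close()
        with open(os.path.join(pkg, "helpers.py"), "w") as f:
            f.write("VALUE = 42\n")
        # The unit-test module uses a relative import.
        with open(os.path.join(pkg, "test_helpers.py"), "w") as f:
            f.write("from . import helpers\nprint(helpers.VALUE)\n")

        # Run the file directly as a script: the relative import blows up.
        direct = subprocess.run(
            [sys.executable, os.path.join(pkg, "test_helpers.py")],
            capture_output=True, text=True)

        # Run it as a module within the package: it works.
        as_module = subprocess.run(
            [sys.executable, "-m", "pkg.test_helpers"],
            capture_output=True, text=True, cwd=root)
        return direct, as_module
```

(The exact wording of the error has changed between Python versions, but it still complains about the relative import either way.)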
Oh, but it turns out they have a fix for this! You can set the __package__ variable: if your code is running as __main__, just catch that before your relative imports and set __package__ yourself. That's the concept in PEP 366, though I have yet to see an example of it that works. But wait -- now I also have to import my top-level package to make this work, and it has to be on sys.path. So I end up with ten-odd lines of custom code before the import, just so I can run a unit test. Sounds great. (/Sarcasm off)
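For the record, here is roughly what that PEP 366 boilerplate looks like in practice -- a sketch, not gospel, with a made-up package name "pkg". The harness below writes the boilerplate into a throwaway package's test module and then runs that file directly as a script, no package context at all:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# The PEP 366 workaround, as it would sit at the top of pkg/test_helpers.py.
BOILERPLATE = textwrap.dedent("""\
    import os, sys

    if __name__ == "__main__" and __package__ in (None, ""):
        # Put the directory *containing* the package on sys.path...
        sys.path.insert(
            0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
        import pkg  # ...import the top-level package anyway...
        __package__ = "pkg"  # ...and tell the import machinery where we live.

    from . import helpers  # the relative import now resolves

    print(helpers.VALUE)
""")

def run_workaround():
    """Write the boilerplate into a temp package and run it as a script."""
    with tempfile.TemporaryDirectory() as root:
        pkg = os.path.join(root, "pkg")
        os.makedirs(pkg)
        open(os.path.join(pkg, "__init__.py"), "w").close()
        with open(os.path.join(pkg, "helpers.py"), "w") as f:
            f.write("VALUE = 42\n")
        script = os.path.join(pkg, "test_helpers.py")
        with open(script, "w") as f:
            f.write(BOILERPLATE)
        # Direct invocation -- no -m flag, no package on the path beforehand.
        return subprocess.run([sys.executable, script],
                              capture_output=True, text=True)
```

Count the lines in that stanza and judge for yourself whether it belongs at the top of every test module.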
Importing in Python has been nothing short of a disaster for about ten years, since the days of "ni" (the "new import" module). The very first thing I ever wanted to do with "ni" was import a whole directory of modules, so I didn't have to do double duty on maintenance. It still doesn't do that. And if you read the documentation on package importing, it actually says -- still -- that the reason they can't offer "from
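You can fake the directory-of-modules import yourself, but only by making the package do the legwork in its own __init__.py. A sketch (the package name "autopkg" and module names are invented): the __init__.py discovers every submodule with pkgutil, imports each one, and exports them via __all__ so a star-import pulls them all in.

```python
import importlib
import os
import sys
import tempfile
import textwrap

# An __init__.py that imports every submodule in its own directory.
INIT = textwrap.dedent("""\
    import importlib, pkgutil

    __all__ = []
    for _finder, _name, _ispkg in pkgutil.iter_modules(__path__):
        # Import the submodule relative to this package and re-export it.
        globals()[_name] = importlib.import_module("." + _name, __name__)
        __all__.append(_name)
""")

def build_and_import():
    """Create a demo package with two submodules and import it."""
    root = tempfile.mkdtemp()  # left in place so the import stays valid
    pkg = os.path.join(root, "autopkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(INIT)
    for name, body in (("alpha", "X = 1"), ("beta", "Y = 2")):
        with open(os.path.join(pkg, name + ".py"), "w") as f:
            f.write(body + "\n")
    sys.path.insert(0, root)
    # After this, "from autopkg import *" brings in alpha and beta.
    return importlib.import_module("autopkg")
```

It works, but the point stands: every package author has to hand-roll it, because the import system won't.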
Python has a number of disaster areas like this. len() is a good one. Another is requiring "self" for methods of a class. Python is a very, very useful language, but I'd argue that for Python 3.0 they focused on fixing a lot of inane nitpicks (print without parens, anyone? Or dict.has_key(), or removing my precious reduce() function) instead of fixing the real problems and inconsistencies.
If you're going to break the language, make it count and make it way better and more consistent, not just better in ways that suit personal pet peeves.