The future of Python: Concurrency devoured, Node.js next on menu

The PyBay 2017 conference, held in San Francisco over the weekend, began with a keynote about concurrency. Though hardly a draw for a general interest audience, the topic – an examination of multithreaded and multiprocess programming techniques – turns out to be central to the future of Python. Since 2008, the Python …

  1. casperghst42

    I do use python, but I still find it very silly that they refuse to implement a switch .. case statement; it causes one to end up with very clunky code.

    1. FluffyERug

      Switch Case

      Utterly trivial to implement....

      See https://stackoverflow.com/questions/60208/replacements-for-switch-statement-in-python
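      Most of those answers converge on the same idiom: a dict of functions standing in for the switch. A minimal sketch (handler names are illustrative):

      ```python
      def handle_add(x, y):
          return x + y

      def handle_sub(x, y):
          return x - y

      # Dict-of-functions dispatch: the keys play the role of case labels,
      # the values are the case bodies.
      DISPATCH = {
          "add": handle_add,
          "sub": handle_sub,
      }

      def apply_op(op, x, y):
          try:
              return DISPATCH[op](x, y)
          except KeyError:
              # The dict equivalent of a missing "default:" branch.
              raise ValueError(f"unknown operation: {op!r}")

      print(apply_op("add", 2, 3))  # 5
      ```

      Unlike a chain of if/elif, the table can be built, extended or inspected at runtime, which is why many Python programmers prefer it.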

      1. Pascal Monett Silver badge

        Trivial ?

        And which of the 10+ examples that absolutely do not function like Switch do you recommend ?

        1. FluffyERug

          Re: Trivial ?

          Actually, Switch/Case is a code smell that can be completely done away with. Especially in languages such as Python.

          1. Roo

            Re: Trivial ?

            "Switch/Case is a code smell"

            In this case: He who smelt it dealt it.

            This whole "code smell" thing has become utter bollocks. It is too often used to promote personal prejudice over and above sound engineering backed by objective reasoning; it's become a shortcut for "I know better than you".

            As it happens the 'switch' statement is a way of representing a common assembler idiom of a 'jump table', which happens to be fairly efficient - and it rather handily tends to keep all the code local which means it'll fit inside a cache line or two if you are lucky/clever. Not using a switch statement where one would fit nicely would be a code stench in my view - and that's my personal prejudice, but at least I can back it up with some objective reasons why it can fit some scenarios better than the alternatives...

            If you want some code smells to work on I suggest you start with the JVM, then the Java libraries and work your way up to Spring. In the case of the JVM you start with a massive --ing runtime that takes an age (in machine terms) to start and consumes vastly more memory than it actually requires to operate - and it does this because it's the only way it can approach the speed of a compiled language.

            I'm hoping that would keep you busy enough to lay off on the switch statement. :)

          2. Kristian Walsh Silver badge

            Re: Trivial ?

            Switch/Case is a code smell, that can be completely done away with. Especially in languages such as Python.

            If it's a code smell, it's a smell of good design. Switch/case is designed to enforce small sets of values, especially when coupled with enumeration types. "Languages such as Python" don't enforce types at all (by default; I know about Python 3's type hints), so attempting to enforce values is a little meaningless.

            In languages that support it, a switch/case block is telling you something very important about the author's mental model of the code at the time they wrote it. It's saying "At this point in the program, I expect this variable to have one of this limited set of constant values"*

            If you're using enumerations, switch/case additionally allows the compiler to do coverage checking for you (warning you when your switch block doesn't test for all possible cases).

            If/elif/else cannot convey that information.

            Saying something "can be completely done away with" is not a useful argument. Ultimately, all you need is 'if zero then goto' for any programming language, but filling your code with such constructs strips it of any hint of what the hell you were trying to achieve when you wrote it. There's a strong argument that the whole point of having high-level languages in the first place is to capture the intentions of the programmer, because hand-optimised machine code is pretty opaque to a maintainer.

            * There are, sadly, exceptions: Swift's switch/case "value bindings" feature ignores the "limited and constant" nature of switch/case in an attempt to be "helpful", and in doing so reduces the structure down to a pretty-printed way of writing if/elseif/else. If you're using "clever" value bindings in Swift, you really should be using if-elseif-else, because all you're doing with value bindings is hiding one kind of test, if, (i.e., "evaluate expression and compare result") within the language structure, switch/case, normally used for a different kind of test.
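            In Python one can approximate that coverage check at runtime rather than compile time - a rough sketch using an Enum plus dict dispatch (names are made up for illustration); the assert fails the moment the module loads if a member goes unhandled:

            ```python
            from enum import Enum

            class Color(Enum):
                RED = 1
                GREEN = 2
                BLUE = 3

            HANDLERS = {
                Color.RED: lambda: "stop",
                Color.GREEN: lambda: "go",
                Color.BLUE: lambda: "hmm",
            }

            # Poor man's exhaustiveness check: fail fast at import time if a
            # member of the enumeration has no handler, instead of relying on
            # a compiler warning.
            _missing = set(Color) - set(HANDLERS)
            assert not _missing, f"unhandled cases: {_missing}"

            def react(c: Color) -> str:
                return HANDLERS[c]()
            ```

            It is weaker than a compile-time check, but it does recover the "this variable has one of a limited set of values" intent.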

            1. Robert Grant

              Re: Trivial ?

              "Languages such as Python" don't enforce types at all (by default; I know about Python 3's type hints), so attempting to enforce values is a little meaningless.

              Python is strongly typed, and enforces those types. Google the difference between strong/weak and static/dynamic typing - it's a good education in programming basics that code boot camps don't always cover. Happy learning! :-)

              1. Kristian Walsh Silver badge

                Re: Trivial ?

                You could have made that objection without coming across as a condescending git, you know.

                I was discussing the type-enforcement features of the language itself: Enumeration types and switch-case illustrate an advantage of time-of-compilation ("static") type knowledge, versus time-of-execution ("dynamic") type knowledge.

                As I was talking about the Python language, it's entirely correct to say that there's no enforcement of data types, because the language itself has no concept of expected types for function arguments. And, while you are also entirely correct that the Python runtime enforces datatypes, that's too late for any feature, such as enumerations, that requires compile-time type knowledge.

          3. Orv Silver badge

            Re: Trivial ?

            I will grant that switch/case is easy to code wrong in some situations. (e.g., forgetting the break; statement, which can lead to subtle bugs.) For long lists of simple values, though, I feel they're a lot less cluttered to read than a long "if/then/else if..." block. The values being tested for end up buried in syntactic clutter in the middle of the line, making them harder to spot.

      2. lesession

        Re: Switch Case

        I love the way the if .. else suggestion gets 167 upvotes.

        167 people who have no idea what the difference is or what the switch case is for ...

        1. kuiash

          Re: Switch Case

          It's for Duff's device isn't it?

          Switches inside loops with gotos and breaks & conditional continues FTW! I helped write a data analysis app back in the 80s. IIRC that's how we treated resampling. Why? I don't know. Guess we thought it was clever!

          Oh. Switch is also useful for breaking C/C++ compatibility.

        2. Brewster's Angle Grinder Silver badge

          Re: Switch Case

          "167 people who have no idea what the difference is or what the switch case is for ..."

          You'll have to enlighten me, then.

          Back in the day, I remember C compilers that would sometimes generate lookup tables for switch statements. But mainly they ended up as the asm equivalent of if-then statements.

          And while I'm here, Duff's device is known to be a performance handicap. Famously, removing numerous instances from the X server reduced code size and increased execution speed.

          1. Roo

            Re: Switch Case

            "And while I'm here, Duff's device is known to be a performance handicap"

            I think a lot of people neglected to pay attention to what Tom Duff was trying to achieve: specifically "loop unrolling" with a compiler that didn't do it - with a minimum of code.

            He was also counting cycles on some fairly exotic big iron - which had a very different set of strengths & weaknesses in comparison to *most* that followed it (eg: memory that keeps up with the core clock, small to zero caches and maybe a couple of cycles max for a memory fetch).

      3. Dan 55 Silver badge

        Re: Switch Case

        All you're showing is Stack Overflow at its worst.

        1. kuiash

          Re: Switch Case

          Here's Linus at his best ranting about boolean switch statement warnings...

          https://lkml.org/lkml/2015/5/27/941

          LOL!

    2. Charlie Clark Silver badge

      I do use language X, but I still find it very silly that they refuse to implement a Y statement

      Pretty much true of all programming languages. I write a lot of Python code and find dispatching much preferable to the SWITCH statement.

  2. Herby Silver badge

    I'll wait...

    For python 4.

    It will happen someday, and then the 2/3 mess will be behind us. Until then, I'll keep my whitespace to myself, one tab at a time.

    1. Brewster's Angle Grinder Silver badge

      Re: I'll wait...

      The javascript split (ecmascript 4) held up the language for a decade. And there was also a lost decade between C++98 and C++11. (And I've lost track of what's happening with Perl 6.) What was it with the early 2000s?

    2. Ucalegon

      Re: I'll wait...

      No pun intended?

    3. foxyshadis

      Re: I'll wait...

      Good luck with that; PHP seems to be the only language interested in major versions anymore, and its major versions would be minor versions to any other language. Python is probably going to be asymptotically on 3 forever.

      1. Orv Silver badge

        Re: I'll wait...

        Perhaps, like TeX, they should have approximated pi more closely with each revision.

  3. Anonymous Coward

    RIP GIL

    They need to remove the GIL.

    It's very easy with event driven programming to accidentally block the event loop with a long running operation - and much like Windows 3 message loops, doing so blocks pretty much everything else, which is rather absurd in this day and age.

    So async programming works best when you can:

    1. Either have multiple concurrent event loops - say 1 thread per event loop, so that's Twisted out of the picture - but you do have to remember not to pass objects that won't be thread safe out of their originating thread,

    2. Or you're happy to push long running objects onto their own pool threads (somehow returning in a timely manner to the event loop if the pool is full) - again remembering thread safety.

    One starts to realise why COM marshalling got so complex.

    PS If anyone from the Twisted project is reading this, why did you have to make Deferred such an awful API? Any plans to fix it ever?
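
    A minimal asyncio sketch of option 2 above - handing a blocking call to a pool thread so the event loop stays responsive (the function names are illustrative, not from any particular framework):

    ```python
    import asyncio
    import time

    def slow_blocking_call():
        time.sleep(0.2)          # stands in for a long-running operation
        return "done"

    async def heartbeat(ticks):
        # Only keeps ticking if the event loop isn't blocked.
        for _ in range(5):
            await asyncio.sleep(0.02)
            ticks.append(1)

    async def main():
        loop = asyncio.get_running_loop()
        ticks = []
        # run_in_executor pushes the blocking call onto the default thread
        # pool; the loop keeps servicing other coroutines meanwhile.
        result, _ = await asyncio.gather(
            loop.run_in_executor(None, slow_blocking_call),
            heartbeat(ticks),
        )
        return result, len(ticks)

    result, ticks = asyncio.run(main())
    ```

    The thread-safety caveats above still apply: objects touched inside the executor call shouldn't be shared with the loop without care.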

    1. thames

      Re: RIP GIL

      Ah yes, the GIL, the favourite whipping boy of people who have heard of Python but don't actually have much experience with it. There are four major independent implementations of Python which are intended for serious commercial use. Two have a GIL, and two don't. The two that don't have a GIL have (or had) major corporate backing, while the ones that do have a GIL did not. Oddly enough, people prefer the versions that have a GIL over the ones that don't by a huge margin. It seems that the GIL isn't a serious enough concern for people who actually write Python software to really be bothered about it.

      1. Anonymous Coward

        Re: RIP GIL

        Oooh Mr Clever, that told us.

    2. Charlie Clark Silver badge

      Re: RIP GIL

      You only need to remove the GIL for better parallelism (on multiple cores); asyncio does the job for concurrency. Of course, now that multicore environments are becoming ubiquitous, the need to use them effectively is increasing, but processor locking has always had advantages.

      Larry Hastings gave an excellent talk last year on his attempts and progress on removing the GIL.
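
      The concurrency-versus-parallelism point is easy to demonstrate: three simulated I/O waits below overlap on a single thread and core, no GIL removal required (a sketch with made-up names):

      ```python
      import asyncio
      import time

      async def fake_fetch(name, delay):
          await asyncio.sleep(delay)   # stands in for waiting on a socket
          return name

      async def main():
          start = time.monotonic()
          # All three "requests" wait concurrently on one thread.
          results = await asyncio.gather(
              fake_fetch("a", 0.1),
              fake_fetch("b", 0.1),
              fake_fetch("c", 0.1),
          )
          return results, time.monotonic() - start

      results, elapsed = asyncio.run(main())
      # Wall time is roughly 0.1 s, not 0.3 s, because the waits overlap.
      ```

      CPU-bound work, by contrast, still serialises under the GIL, which is where removing it (or using multiprocessing) would actually pay off.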

    3. Alan Johnson

      Re: RIP GIL

      Yes - who writes this stuff? Both are necessary and neither replaces the other.

      I do not program in Python but the two concepts of event driven programming and multi-threaded or concurrent programming are separate, do not address the same issues and are frequently complementary.

      The problem with event driven programming is that the event loop must not be blocked for a significant (whatever that means) length of time. It is necessary and normal therefore to have event queues etc and multiple threads to handle operations which are long and do not make sense to break up into smaller sub-operations. If realtime matters then having a single event queue and processing is a major issue because it imposes FIFO scheduling on the system which is almost certainly not appropriate.

  4. John Smith 19 Gold badge

    So the 80's are back. Co-operative multi tasking.

    Because the Windows event loop showed it works soooooo well.

    1. Brewster's Angle Grinder Silver badge

      Re: So the 80's are back. Co-operative multi tasking.

      Windows wasn't so much cooperative multitasking as competitive multitasking, with every thread competing to see who could get the biggest slice of the processor. But within an app the coroutines are all under your control so you can bludgeon offenders into submission.

      It doesn't fit all use cases. But it can help in some situations.

      1. John Smith 19 Gold badge

        " with every thread competing to see who could get the biggest slice of the processor."

        Until one of the procedures on the massive case statement swallows all the messages and the whole system goes TITSUP.

        There's a reason Windows eventually went preemptive, other than NT being built by a team with experience of writing an actual production grade OS.

        1. Brewster's Angle Grinder Silver badge

          Re: " with every thread competing to see who could get the biggest slice of the processor."

          Cooperative multitasking was always a silly idea at the OS level. (Calling it "competitive multitasking" was meant to be disparaging.) And it wasn't hard to do; hell, I wrote apps that did it internally on DOS and I'd come from eight bit micros that had it (OS9), so it was ludicrous Microsoft didn't do it (although, without hardware memory protection, it would've always been a crap shoot).

          At an application level, however, it is a very different beast. You want an app to work, so a blocking coroutine is a bug. But, as I say, it's not right for every situation. I've been playing with it for a good while in javascript and I still mix and match it with background threads.

  5. Adam 52 Silver badge

    Python 3 split over?

    From the PySpark documentation I was reading this morning:

    "PySpark requires Python 2.6 or higher. PySpark applications are executed using a standard CPython interpreter in order to support Python modules that use C extensions. We have not tested PySpark with Python 3 or with alternative Python interpreters"

    1. Ken Hagan Gold badge

      Re: Python 3 split over?

      Python3 is not *that* much of a change. Yes there are breaking changes, but none should trouble a competent programmer if the code is under active maintenance. So if anyone is presenting code in 2017 and spreading FUD about 3, you should avoid them. They do not understand their chosen implementation language and that is never going to end well.

      1. Anonymous Coward

        Re: Python 3 split over?

        @Ken Hagan "...Yes there are breaking changes, but none should trouble a competent programmer..."

        *

        I suppose this is true. I'm not a professional programmer, but back in the Python 1.5 days around the year 2000 I started writing a fair number of Python utilities and tools for my own use. All of these programs survived without much (or any) maintenance till around 2014. That was when Red Hat announced that Python3 was the go-to version for the future of Fedora (although they continued support for 2.7, and still continue that support today).

        *

        This change made me convert all my tools and utilities from Python 2.7 to 3.x. The 2to3 utility absolutely didn't find everything. Converting tkinter (windowed) programs was a REAL pain. Bottom line -- I spent a lot of my spare time over about a year doing the conversion. No tweaks or improvements just one-to-one functional conversion.

        *

        My beef is that this conversion provided me with ABSOLUTELY NO VALUE....everything looks and runs just the same as it always did. But Guido has got the print statement converted to print(). Yeh!!

      2. Anonymous Coward

        Re: Python 3 split over?

        That's the problem, Ken. Python3 made just enough breaking changes to annoy programmers, without fixing major design flaws.

        I still use Python (any version) for small things where it's convenient. The stability is nice. But I haven't taken it seriously as a language since the 3.0 release.

        1. Charlie Clark Silver badge

          Re: Python 3 split over?

          Python3 made just enough breaking changes to annoy programmers, without fixing major design flaws.

          While it's arguable that Python 3 did actually fix some (but not all) design flaws, doing so brought some unnecessary incompatibility (unicode) and a considerable performance cost. However, since Python 3.5, performance is generally on a par with Python 2 and asyncio does offer new opportunities.

          Some systems will stick with Python 2 for as long as possible because they just work and the costs associated with migration far outweigh the benefits. But this is true of many systems and why virtualisation is so important.

          But for the last few years lots of projects have added Python 3 support and new ones are written exclusively for it. This means that newer programmers rarely face any problems.

          There are lessons to be learned from 2/3 and we can only hope that future changes in the language are handled with a greater understanding for the maintenance of existing libraries and applications. I think that the shift to time-based releases under Larry Hastings is evidence of this.

          1. foxyshadis

            Re: Python 3 split over?

            Programmers who consider Unicode an "unnecessary incompatibility" are the reason why so much software is fundamentally broken anytime it encounters anything that isn't Latin-1. I don't know about you, because you probably never had to touch foreign words or names at all, but Code Pages were a damned nightmare to anyone who actually wanted to do things right.

            It really isn't that difficult to figure out bytes vs strings. You guys have had 10 years to wrap your heads around it, and all you have to do is do the right thing. It's not like Python 2.7 is going anywhere, literally all you have to do is convert your shell files from calling python to python2 to make them work, but you're too incompetent to even do that!

            This is literally no different from the worthless sysadmins that still complain about Perl 6 and Linux 3, because it violates their comfortable safe space, and they just want to get paid to never have to learn anything ever again.

            1. Anonymous Coward

              Re: Python 3 split over?

              @foxyshadis, I've left you the space below in case you feel the need to rant about anyone else:

              <rant>

              .

              .

              .

              .

              .

              .

              .

              .

              .

              </rant>

            2. Charlie Clark Silver badge

              Re: Python 3 split over?

              I don't know about you, because you probably never had to touch foreign words or names at all

              Seeing as I live in Germany I have to do it a lot…

              I understand the difference between bytes and strings just fine but it wasn't until u"" was restored in Python 3.3 that porting from 2 to 3 felt less like shooting yourself in the foot. Keeping the literal around wouldn't have cost anything and would have kept a lot of goodwill and would undoubtedly have brought the ports of many projects forward.

    2. thames

      Re: Python 3 split over?

      I don't use Apache Spark myself, but there are apparently lots of people using it with Python 3. Python 2 is the default, but you select Python 3 by setting an environment variable.

  6. Kevin McMurtrie Silver badge

    Async not always easy

    Async I/O is not necessarily easier than multi-threading. Blocking I/O is trivially simple but it consumes a thread for an unknown duration. The workaround is multiple threads, and that's where it might become hard. Async I/O is tricky to stream because the control is reversed - you read data that is pushed to you and write data that is pulled from you. Coders can still create bugs by allowing events to complete while the program is no longer in a state to accept them. My preference is for both blocking and async mechanisms to be available since they have different advantages and disadvantages. I also like using async tasks for a lot more than I/O.

    An interesting note is that Jython supports threads. I used it for a while and there were few threading problems specific to the Python language itself. All it needs is some coordination classes for semaphores and piping data between threads. With machines easily having 32+ hardware threads, it's stupid to say that you need to launch 32+ copies of your app to use them.
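
    The coordination classes mentioned above already exist in the standard library; a small sketch of piping data between threads with queue.Queue (values and names are illustrative):

    ```python
    import queue
    import threading

    def producer(q):
        for i in range(5):
            q.put(i)
        q.put(None)              # sentinel: no more data coming

    def consumer(q, out):
        while True:
            item = q.get()       # blocks until the producer supplies data
            if item is None:
                break
            out.append(item * item)

    q = queue.Queue()
    out = []
    t1 = threading.Thread(target=producer, args=(q,))
    t2 = threading.Thread(target=consumer, args=(q, out))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(out)  # [0, 1, 4, 9, 16]
    ```

    On CPython the GIL stops the two threads running Python bytecode in parallel; on Jython (or for I/O-bound work) the same pattern genuinely uses multiple cores.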

    1. Anonymous Coward

      Re: Async not always easy

      You might find Actor model or Communicating Sequential Processes interesting. The latter is particularly good in my opinion, solving (well, highlighting) many of the theoretical problems that exist with systems using multiple paths of execution. ZeroMQ is an excellent Actor Model formulation and has many very useful features to recommend it.

      A quick summary of the difference. In actor model programming (i.e. everyone's interpretation of async io), a sender can send a message with no knowledge as to whether the receiver has (or will) receive it. In CSP, the sender blocks until the receiver has read the message.

      Depending on what you're doing (e.g. a real time processing system), the "blocking" is not a problem. If it is, it simply means that the architecture is wrong, the implication being that there needs to be more receivers to share the workload. Actor model's asynchronicity sounds good, but really it just means that you disguise an inadequate architecture with latency...

      Another aspect in a complicated system is the potential for deadlock; circular dependencies and similar problems can easily be written into Actor and CSP systems. The difference between actor and CSP is that an actor system may not deadlock until many years later when some network connection becomes a bit busy, whilst a CSP system will guarantee to deadlock each and every time.

      Obviously if the system is doing a web-server like thing, then there's no need for any of this. In such a system, you just want to push messages out in the general direction of the client, and you don't actually care when / if / how it gets there.
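
      The send-side contrast can be sketched with nothing but the standard library - an unbounded queue for the actor-style mailbox, and put() plus join() as a rough approximation of a CSP rendezvous (this is an illustration, not how ZeroMQ or a real CSP runtime implements it):

      ```python
      import queue
      import threading
      import time

      # Actor-style mailbox: put() on an unbounded queue never blocks,
      # whether or not anyone is reading yet.
      mailbox = queue.Queue()
      mailbox.put("fire-and-forget")   # returns immediately

      # CSP-style handoff (rough sketch): put() then join() makes the
      # sender wait until the receiver has actually processed the item.
      channel = queue.Queue(maxsize=1)
      received = []

      def receiver():
          for _ in range(3):
              time.sleep(0.02)          # a deliberately slow receiver
              item = channel.get()
              received.append(item)
              channel.task_done()

      t = threading.Thread(target=receiver)
      t.start()
      for i in range(3):
          channel.put(i)
          channel.join()                # rendezvous: blocks until consumed
      t.join()
      ```

      With the actor mailbox a slow receiver just means a growing queue (hidden latency); with the blocking handoff the sender immediately feels the back-pressure.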

      Danger of Missing an Opportunity

      As an old school programmer who grew up with parallel processing on Transputers in the early 1990s (= Communicating Sequential Processes), it's been jolly amusing to see the modern world rediscover stuff that was essentially all done back in the 1970s, 1980s. It's only gone and taken nearly 30 bloody years.

      The problem with this new async stuff in Python is that it's barely beginning to touch the surface of what has already been done elsewhere. This is a problem because no doubt a bunch of people will pick it up, a ton of stuff will get written, it will then not be changeable, and it ends up being a wasted opportunity for Sorting it Out Properly.

      This async stuff sounds like it'll be pretty lame in comparison to ZeroMQ. As a bare minimum they should look at ZeroMQ, pay attention to the different patterns it implements, and replicate them. Anything less than that is simply wasting future programmers' time.

      No Perfect Solution Yet

      I like ZeroMQ because of its clever patterns and sheer just-gets-on-with-the-job no mucking about approach to joining threads / processes / machines together in a way that means you no longer really care about how it's done anymore. I like Rust and (probably) Go because they've actually gone and adopted Communicating Sequential Processes - a very good move.

      However, none of it is quite there yet. ZeroMQ bridges between threads / processes / machines and is admirably OS / language agnostic, but is Actor model, not CSP. Rust does CSP, but AFAIK the reach of a channel in Rust is stuck within the confines of the process; it won't go inter-process and it certainly won't go inter-machine. To me the ideal would be ZeroMQ, but one where the socket high water mark can be set to zero (which would then make it CSP).

      Generally speaking I actually use ZeroMQ, and something like Google Protocol Buffers on top (though I prefer ASN.1). That way you can freely mix different OSes, languages and hardware into a single system, whilst being able to deploy it on anything ranging from a single machine to a large cluster of machines. This level of heterogeneity is fantastic when you're developing systems that you're not quite sure what they'll end up looking like.

    2. bombastic bob Silver badge

      Re: Async not always easy

      In programming lingos like Python, maybe, but I've been doing asynchronous things in C/C++ for decades.

      It's usually a matter of careful design, use of sync objects, etc. and, of course, background threads.

      However, having "all of that" in Python might be useful. Then again, it might be forcing Python to do things it shouldn't be used for...

      I ended up falling into a place where I have to update a Django web site for a customer. Of course, so MANY things were so highly inefficient that I wrote a C utility to do the most time-consuming operations 30 times faster than before, and invoked it as an external utility from the existing Python code. Yes, I measured the performance difference. 30 times faster.

      The beauty of Python, though, is that it HAS those provisions built-in to invoke an external utility and return its stdout output as a string. I think Java painfully LACKS that kind of support, last I checked (I could be wrong, I'm not that familiar with Java). Anyway, it makes a LOT of sense.
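
      The built-in in question is the subprocess module; a minimal sketch, using a Python one-liner as the "external utility" purely so the example is portable:

      ```python
      import subprocess
      import sys

      # Run an external program and capture its stdout as a string.
      result = subprocess.run(
          [sys.executable, "-c", "print(6 * 7)"],
          capture_output=True,   # collect stdout/stderr instead of inheriting
          text=True,             # decode bytes to str
          check=True,            # raise if the utility exits non-zero
      )
      print(result.stdout.strip())  # 42
      ```

      In real use the argument list would name the compiled C utility instead of sys.executable.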

      And that leads me to another point: perhaps it's a BAD idea to try and force Python to do things it shouldn't be used for in the FIRST place. Right?

      I've written my own customized web servers in C and C++ before, including a really small one that runs on an Arduino. So I think I can be a pretty good judge of "you're doing it wrong". Django is "doing it wrong".

      However, what Python seems to do REALLY well is allow you to quickly throw together a utility or a proof of concept application. Alongside shell scripts, Perl, and the occasional C language external utility, it's a nice addition to a computer that's used to "get things done".

      I'm not sure what threads and async I/O will actually do for anyone, in the long run. Maybe "nice to have" but if you're concerned about I/O performance, WRITE IT IN C OR C++.

      /me intentionally didn't mention C-pound (until now). The fact that I call it 'C-pound' is proof of why.

      1. david 12 Bronze badge

        Re: Async not always easy

        >Alongside shell scripts, Perl, and the occasional C language external utility, it's a nice addition to a computer that's used to "get things done".<

        And it would be even better at that if it had resumable exceptions.

        Resumable exceptions make speed optimisation more difficult (not impossible, but more difficult). On the other hand, resumable exceptions enable finely-grained exception handling for i/o-bound exception-prone multi-threaded asynchronous processes that spend most of their time waiting anyway.

        And once you've written your first application with a separate try/catch block for every single line you've learned why resumable exceptions are not universally a bad idea.

      2. John Smith 19 Gold badge

        "it's a BAD idea to..force Python to do things it shouldn't be used for in the FIRST place. Right?"

        'Nokay

        A remarkably balanced and sane PoV. A lesson that should be taught on all CS courses.

        But IRL...

        You get people trying to write an OS in FORTRAN.

        And then the fun begins....*

        *I know it's stupid. You know it's stupid. But the Board spent a shed load on that new (cross) compiler so it's going to get used.

        Let the death march begin.

      3. foxyshadis

        Re: Async not always easy

        Aside from shelling out, Python also has fully-working dll/so support, with the ctypes library or one of its pretty wrappers, saving even more overhead versus spinning up an executable and parsing its stdout. Practically all of the important libraries have cpu-intensive operations in compiled .pyd (which is just a dll/so), and quite a few wrappers exist to call out to standard libs.
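
        A minimal ctypes sketch - calling a function straight out of the C runtime with no subprocess involved (the library name varies by platform, hence find_library; the None fallback assumes a POSIX system):

        ```python
        import ctypes
        import ctypes.util

        # Load the C runtime; find_library resolves the platform name
        # (e.g. "libc.so.6" on Linux).
        libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

        # Declare the signature so ctypes converts arguments correctly.
        libc.abs.argtypes = [ctypes.c_int]
        libc.abs.restype = ctypes.c_int

        print(libc.abs(-5))  # 5
        ```

        Declaring argtypes/restype matters: without them ctypes guesses, which works for ints but silently corrupts pointers and floats.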

    3. Yes Me Silver badge

      Re: Async not always easy

      "It's insanely difficult to get large multi-threaded programs correct," Hettinger explained. "For complex systems, async is much easier to get right than threads with locks." That strikes me as absurd. Threads, queues and locks are easy to get right. The model is clear and the pitfalls are well known to every CS student. There are other things in Python that are much more tricky (the semantics of 'global', the absence of a clear difference between call by name and call by value, and of course sloppy types are but three examples).

      Event loops are a cop-out compared to real multi-threading. Tkinter is a good example of how not to do things properly. I haven't looked at asyncio, but anybody who thinks Twisted is better than Python threading is... twisted.

  7. Stevie Silver badge

    Bah!

    Cobol programmers were doing async decades ago.

    Get off my lawn.

    1. getHandle

      Re: Bah!

      You're a throw-back but have an upvote anyway. Currently overseeing guys doing it in C++... Feeling old!

    2. Ken Hagan Gold badge

      Re: Bah!

      PJ Plauger wrote an essay about 30 years ago in which he described the evolution of a programming model that is so standard that many readers of this forum might be unaware that there was ever any other.

      He noted that when operating systems first started being able to run multiple programs (Yes children, that was a thing once.) the OS designers naively offered pretty much all of the synchronisation primitives to user-space programmers that they had themselves used to implement the operating system. This included stuff like async IO, signals, multiple threads of execution in a single address space, ... whatever.

      Very quickly they learned that their customers, the pleb user-space programmers, couldn't handle this. The solution was to create the abstraction of "your program owns the entire machine and there is only one thread in your program". The plebs could handle that and if the OS was cunning enough it could run several pleb programs and multi-task them against each other to maintain efficient resource usage.

      I can imagine that the abilities of some pleb programmers has gone up a little since then, but probably not enough to make it safe to encourage everyone to do everything async "just because they can".

      1. Mage Silver badge
        Unhappy

        Re: Bah!

        Did we abandon real computer science in the late 80s / early 90s in favour of "languages", "libraries" and "frameworks"?

        Have we abandoned compile time validation in favour of run time testing?

        Is discussing merits of Javascript vs python 2 vs python 3 missing the bigger picture?

        Async, threads, co-routines, mutex, signals, processes are all tools for design of systems with concurrency. Sometimes co-operative multitasking, pre-emptive or dataflow design is best approach. What works for user space on a desktop GUI may not be appropriate for a device driver or a web server with an SQL back end.

      2. Version 1.0 Silver badge
        Unhappy

        Re: Bah!

        I can imagine that the abilities of some pleb programmers has gone up a little since then

        It seems to have gone the other way - I see a lot of cases where programmers, having been taught to code in school by the age of 11, continue to code that way forever - but now they pretty print their code so that it looks professional.

        The end result is that we have lots of "programmers" these days, but the proportion of them that are actually any good hasn't changed in years.

      3. fedoraman

        Re: Bah!

        Do you have a link, or the title of the essay? Curious minds want to know more

        1. Ken Hagan Gold badge

          Re: Bah!

          It is in one of three paperback collections of essays that originally appeared in some magazine, but I can't remember if it is in vol 1, 2 or 3. I certainly can't remember the title and it may or may not be online even now. I haven't re-read these for many years but I thought they were mostly good and occasionally excellent, so if you can pick up old copies on the cheap then I'd recommend it.

          If I can find the actual essay about concurrency then I'll post again.

          Sample amazon links for completeness:

          Vol 1

          Vol 2

          Vol 3

          1. Ken Hagan Gold badge

            Re: Bah!

            "If I can find the actual essay about concurrency then I'll post again."

            It's essay 14 ("Synchronisation") in the book "Programming on Purpose - Essays on software design" and the ISBN in my copy is 0-13-721374-3. It is based on two columns that he wrote for "Computer Language" magazine in November and December 1987, but the only thing that really dates it is a passing reference to "a social faux pas roughly equivalent to turning down a date with Brooke Shields".

      4. Stevie Silver badge

        Re: Bah!

        But Ken, back then programmers knew what they were doing. You know why?

        Because they learned how in the real world with burly chief programmers who would break fingers and take spleens in the event of fuckups and be patted on the back by the chief analyst when it all came out in public.

        I still have nightmares about the head of the punchroom. Most militant and violent woman I ever met and Azathoth protect you if you crossed her path when she was firing over open sights.

        The rise of Pleb Programmers is, I think, a far more modern, post BBC Model B era phenomenomnomnominon.

      5. Roo

        Re: Bah!

        The solution was to create the abstraction of "your program owns the entire machine and there is only one thread in your program".

        I think the microservices folks have rediscovered that one. :)

        The following documentary "UNIX: Making Computers Easier To Use -- AT&T Archives film from 1982" shows the reasoning behind UNIX guys pushing that exact same approach (pre-threads !).

        https://youtu.be/XvDZLjaCJuw

      6. Orv Silver badge

        Re: Bah!

        I can imagine that the abilities of some pleb programmers has gone up a little since then, but probably not enough to make it safe to encourage everyone to do everything async "just because they can".

        There are very accessible languages now that center around the concept of an async event loop -- Visual Basic and JavaScript are two examples. I think a lot of pleb programmers have at least a passing familiarity with the concepts involved now. I started out doing strictly procedural, single-threaded stuff, so the whole concept of things like callbacks was alien to me, but I picked it up fairly quickly. Scoping issues still sometimes confuse me, though. (I'm convinced you could keep any JavaScript programmer busy by giving them a moderately complex program with nested callbacks, then drawing an arrow inside a code block and asking, "what object does 'this' point to here?")

  8. maxregister

    >JavaScript has a better back-end story than Python has a front-end story right now

    That's obviously true because Python's frontend is virtually non-existent. HTML and Google & Apple's native languages are the ones that will hold the frontend spotlight for the foreseeable future.

    Frontend python will never be popular. That does not, in any way, indicate that server-side javascript will ever be popular.

    1. Graham Dawson

      With some trepidation, I must inform you that serverside javascript is already popular.

      I work there from time to time.

      Hence the trepidation.

      ETA: I can't tell if I'm being downvoted for shittalking server-side JS or for saying that it's popular. (It is. This is probably terrible. Don't shoot the messenger.)

      1. Charlie Clark Silver badge

        With some trepidation, I must inform you that serverside javascript is already popular.

        Ain't it just? And just waiting for all those gotchas we thought we'd ironed out of other "stacks" to come back with a vengeance!

  9. John Smith 19 Gold badge
    Unhappy

    There's a reason Unix has this idea of "one program per job" and "pipes" to link them.

    In effect the OS is a part of the system that lets you build a "processor" (of "stuff") out of smaller, more easily debugged parts.

    Or for those used to the IBM iSeries "Readers" and "Writers"

    1. bombastic bob Silver badge
      Devil

      Re: There's a reason Unix has this idea of "one program per job" and "pipes" to link them.

      works for me. python wrapper calls C language utilities, passing and returning stuff via stdio.

      import subprocess

      rval = subprocess.check_output([some_program, arg1, arg2])

      let 'some_program' do all of the work, and just use python to sequence and control things. couple that with a nice UI (GTK?) and you have a rapidly generated application that's actually efficient. Other options exist, of course, like spawning concurrent things, but I bet a single C application could manage THAT part, and return back success/fail info via stdout when everything's done.

      unfortunately you may need to put a 'try' block around that - I F'ing *HATE* that. Some "C-pound type" probably FELT that an application returning a non-zero *HAD* to be a FATAL DAMNED CATASTROPHE and "throw an exception" like that. Exception throwing is just IRRITATING as far as I'm concerned. But I'm a C coder, so there you go. And that's my point, to do the REAL work, code it in C (or C++ without the use of exceptions because they're lame).
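      For what it's worth, the pattern described above, exception handling included, looks roughly like this ('some_program' is faked here with a python one-liner rather than a real C utility):

```python
import subprocess
import sys

# Stand-in for the external C utility: a one-liner that prints and exits 0.
some_program = [sys.executable, "-c", "print('did the work')"]

try:
    # check_output returns the child's stdout; text=True decodes it to str
    rval = subprocess.check_output(some_program, text=True)
    print(rval.strip())
except subprocess.CalledProcessError as e:
    # a non-zero exit status raises here instead of silently continuing
    print("child failed with status", e.returncode, file=sys.stderr)
```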

    2. Anonymous Coward
      Anonymous Coward

      Re: There's a reason Unix has this idea of "one program per job" and "pipes" to link them.

      Yes, it was a model designed when very few 'programs' could fit into RAM so you needed to chain them while I/O was stored in the pipes while one program unloaded and the next one was loaded into RAM. It's an utterly outdated model, especially because error handling is a nightmare, and there are very few checks on the parameters (that's why they are easy attack targets).

      It just leads to fragile applications, although it's still useful for some quick and dirty command line or script tasks.

      But keep on thinking what was necessary to cope with 48K of RAM is sound design still today...

      1. Roo
        Windows

        Re: There's a reason Unix has this idea of "one program per job" and "pipes" to link them.

        "Yes, it was a model designed when very few 'programs' could fit into RAM so you needed to chain them while I/O was stored in the pipes while one program unloaded and the next one was loaded into RAM."

        That wasn't the sole reason. One of the reasons is that they wanted something that was easy to understand and use on datasets that vastly exceed the capacity of a single address space & processor. The fact that the vast majority of HPC boxes out there run something akin to that model should tell you that a lot of people still find that model very valuable.

        "It's an utterly outdated model,"

        Well no, because HPC - and all the other stuff that doesn't fit into a single address space or a single machine. For example the microservices model is built on the same principle - lots of little bits of code talking to each other over some kind of pipe.

        " especially because error handling is a nightmare, and there's very little checks about the parameters (that's why they are easy attack targets)."

        That really depends on how you do it, but I do concede that handling errors across multiple threads (or processes) is generally a lot harder than working on a single thread/process.

        "But keep on thinking what was necessary to cope with 48K of RAM is sound design still today..."

        I'll let you think that as my place of work crunches through multi-petabyte datasets with their wimpy 256Gbyte/32 core boxes.

      2. Ken Hagan Gold badge

        Re: There's a reason Unix has this idea of "one program per job" and "pipes" to link them.

        "Yes, it was a model designed when very few 'programs' could fit into RAM so you needed to chain them while I/O was stored in the pipes while one program unloaded and the next one was loaded into RAM. "

        That may be how MS-DOS did it, because it had no multi-tasking, but it was never part of the model. Pipeline elements are easier to design, easier to test and easier to re-use than possibly any other concurrency primitive. If you have examples with messy error handling or inadequate parameter checking then that's just your examples.

        And as a final snark, 48K is about the size of a modern processor's actual (as opposed to architected) register set, so a concurrency abstraction that lets you fit something useful into that space might just be the basis of the next generation of processors.
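        As a toy illustration of that composability, here is a two-stage pipeline where each stage is a separate process that can be designed and tested on its own (the two one-liners are invented for the example):

```python
import subprocess
import sys

# Stage one emits the numbers 0..9, one per line.
producer = subprocess.Popen(
    [sys.executable, "-c", "print('\\n'.join(str(n) for n in range(10)))"],
    stdout=subprocess.PIPE)

# Stage two reads stdin and prints the sum of the even numbers.
consumer = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(sum(int(l) for l in sys.stdin if int(l) % 2 == 0))"],
    stdin=producer.stdout, stdout=subprocess.PIPE, text=True)

producer.stdout.close()     # so the producer sees a broken pipe if stage two dies
out = consumer.communicate()[0].strip()
print(out)                  # 0 + 2 + 4 + 6 + 8 -> 20
```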

  10. kquick

    Actor model

    The Actor model (https://en.wikipedia.org/wiki/Actor_model) is another alternative for handling concurrency. One Python Actor model implementation is at http://thespianpy.com/ and provides the ability to abstract concurrency, scheduling, and communication methods out of the core logic instead of scattering async specifications throughout.
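    The idea is easy to sketch with only the standard library (note: Thespian's actual API differs; this just illustrates the message-passing principle, with names of my own invention):

```python
import queue
import threading

# A minimal actor: private state, a mailbox, and one thread draining it.
# Nothing else touches the actor's state, so no locks are needed.
class Actor:
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            reply_to.put(self.receive(msg))

    def ask(self, msg):
        # send a message and block for the reply
        reply = queue.Queue()
        self.mailbox.put((msg, reply))
        return reply.get()

class Doubler(Actor):
    def receive(self, msg):
        return msg * 2

print(Doubler().ask(21))   # -> 42
```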

  11. Anonymous Coward
    Anonymous Coward

    TWO PAGES ?!

    Commenting only to point out the idiocy of The Register splitting stories like this into multiple pages while posting stories about efficiency, web standards, user interfaces, etc.

    1. as2003

      Re: TWO PAGES ?!

      It's so you can load each page in a different tab and read them concurrently.

      1. bombastic bob Silver badge

        Re: TWO PAGES ?!

        I thought it was to double-up the potential advertising you're exposed to

  12. FelixReg

    python 3 is off track

    I love Python.

    But it doesn't run in the browser, despite some tries.

    And it doesn't run under Android, despite some tries.

    Concurrency can be a problem in Python, but that's true of pretty much all languages. The spiffs of Python 3 are tangential to it being usable in a browser and under Android. So they are irrelevant for Python's future prospects.

    Anyway, Python3 is simply another language than Python2. It's "easy" to translate Python2 into various languages, including Python3. But doing so is work. Grunt work. Overhead. Friction.

  13. thames

    Python and Threading Explained

    Python has threads and has had them for a long time. For the most widely used versions of Python, however, CPU-bound threads don't run in parallel. Instead, they share a global lock (the GIL) so that user code on different threads will not interfere with each other or with run-time data structures. You can write multi-threaded code without worrying about putting locks everywhere.

    I/O bound threads automatically release the GIL during I/O calls, and 'C' extensions (many libraries are written in C) can optionally release the GIL if they have some long running computations to perform. However, that is all transparent to the user.

    What the most popular versions of Python don't do with threads is run CPU bound threads simultaneously in the same process. Instead, the recommended method is to use multiple processes which have no such limitations. There are several ways of doing this in the standard library. This is the preferred method of doing things in the web and HPC communities, which also happen to be the people who drive the development direction of CPython, which is by far the most popular implementation. The reason for this preference for multi-process is that they tend to think of "scaling" as being in terms of multiple racks, not in terms of a handful of cores in a single box.
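    A minimal sketch of that multi-process route, using the standard library's multiprocessing module (the workload is illustrative):

```python
from multiprocessing import Pool

# Each call to burn runs in a separate process with its own interpreter
# and its own GIL, so CPU-bound work genuinely runs in parallel.
def burn(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(burn, [10, 100, 1000]))   # [285, 328350, 332833500]
```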

    Asynchronous I/O is a way of handling I/O bound applications while consuming less memory than using multiple threads. The typical use of this is in web applications, where applications can end up being memory bound rather than CPU bound. These are also applications which are typically scaled across multiple boxes or racks for greater scalability and redundancy, so they also tend to be designed around multiple processes rather than multiple threads.

    Python has had async-style programming in the standard library for a long time. The asyncore library dates back to some time in the 1990s. Twisted is asyncore on steroids. Node.js is inspired directly by Twisted. The original developer said in an interview that he wanted to combine Twisted with a JIT, and so created Node based on Chrome's Javascript VM (the PyPy Python JIT was not available at that time).

    There arose various asynchronous third-party Python libraries, most being incompatible with each other. The head of Python development (Guido van Rossum) created the new "asyncio" system in an attempt to unify them so they could share code libraries and work together better. The idea is that they would replace their own lower-level event loops with ones provided by asyncio while maintaining their upper levels (the part the application programmer sees) for compatibility. This seems to have worked quite well from a technical perspective.
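    In asyncio terms, the single-threaded event-loop model those frameworks converged on looks like this (a toy sketch; the sleep stands in for real network I/O):

```python
import asyncio

# Each coroutine yields to the event loop while "waiting on the network",
# so many of them make progress concurrently on one thread.
async def fetch(i):
    await asyncio.sleep(0.01)   # stand-in for a slow network call
    return i * 10

async def main():
    # run all five "requests" concurrently and collect the results
    return await asyncio.gather(*(fetch(i) for i in range(5)))

print(asyncio.run(main()))      # [0, 10, 20, 30, 40]
```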

    If you want Java-style threads though, then use Jython or IronPython. Most people however don't see much advantage to it and so don't use those versions. PyPy (Python with a JIT) was working on Software Transactional Memory (STM) as yet another alternative approach to the problem. However, they recently abandoned the effort as they didn't see a way to get acceptable performance out of it on today's hardware.

    Some years ago someone did produce a version of CPython (the most popular implementation) with the GIL removed. However, that had such a negative effect on performance that it too was abandoned. Multi-threading without a GIL has a lot of overhead on modern hardware whether you actually use the feature or not, and people running single threaded applications simply didn't want to pay the performance penalty for the benefit of the few people who wanted the feature. It can come back, but it is up to the people who want it to produce an implementation which doesn't have the sort of negative side effects on single threaded performance and memory consumption that seems inherent to it.

    As for Python 2/3 changes, as of somewhere around Python 3.5 or so, version 3 of Python has acquired the features which make it relatively easy to port difficult Python 2 applications to the newer version. Major frameworks such as Django have dropped or are dropping Python 2 support.

    The main problems people encountered in the 2 to 3 transition were related to existing user-application Unicode bugs being exposed once strings were split into separate Unicode text string and binary data array types (both were combined in one type in version 2, and you never knew what you would get out of some functions). Another was integers being unified into one arbitrary-precision integer type instead of two different small and large integers. Now an integer is an integer and you don't have to deal with version 2 bugs caused by integer overflow changing the data type on you. I've been burned by both problems with version 2, and am much happier using version 3.

    The main focus of the 2 to 3 change was on making it Unicode compatible from top to bottom by default, rather than as something the programmer had to add in himself. That was never going to be painless since too many user applications were written based on the assumption that everyone in the world speaks English.
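    Both changes are easy to demonstrate (the example string is mine):

```python
# Python 3 keeps text and binary data apart, and failures are loud:
text = "naïve"                    # str: Unicode text
data = text.encode("utf-8")       # bytes: its binary encoding
assert len(text) == 5 and len(data) == 6   # 'ï' takes two UTF-8 bytes

try:
    text + data                   # Python 2 would sometimes "work" here
except TypeError:
    print("str + bytes refused, as it should be")

# Integers no longer silently overflow or change type at machine width:
assert 2 ** 64 + 1 == 18446744073709551617
```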

    At present, mainstream Python development is focused on performance improvements, reductions in memory consumption, and reducing start up times. That will be the focus for the next few years, and major improvements have been seen in the version currently under development.

    1. bombastic bob Silver badge
      Stop

      Re: Python and Threading Explained

      a bit long, but I think you missed an important point: Python is not a lingo for which to implement the KINDS of things that typically require multiple threads, for efficiency and concurrency, etc.

      Already mentioned, but I'll mention it again: I used a C language utility to replace inefficient python code for a Django web server, and increased the speed of that task by a factor of 30. That's THIRTY. Yeah. I invoked it from python using the 'subprocess' object, returning the stdout as a string. That output was then passed along to other things. It made it possible to do an upload + data conversion in a few seconds, rather than OVER 2 MINUTES [which was timing out the apache proxies, and irritating people]. So the web page display (showing the data results) comes back in a reasonable amount of time, now.

      It makes the point that Python is NOT well suited to a lot of things, from numerical calculations, to parsing a binary file and generating CSV and XML data. Because that is what the C language utility does.

      So if you have to deal with a 'GIL' aka "giant lock" (another term for the same kind of thing) that blocks EVERYTHING like that, it completely misses the boat on performance.

      Python has its uses, but forcing it to act like C or C++ isn't it. If you want performance, use C or C++. If you want convenience, or need a wrapper around your C/C++ utilities, Python will do nicely.

      1. thames

        Re: Python and Threading Explained

        @bombastic bob - Recommended practice for Python programming is that when you have a function that would be better programmed in C, then use C. Unlike some other systems, Python doesn't try to be a one-size fits all language. There's loads of Python libraries which are written in C or Fortran and called from Python.

        I've written a C language library for Python that is up to 500 times faster than the Python equivalent for byte data types, but to a large extent that's because it is using SIMD instructions (via compiler intrinsics - essentially embedded assembly language) to get parallelism at the instruction level whereas Python integers are "large" integers (not native integers, and no upper limit on size). Each has its pros and cons, and you need to make the trade-offs of performance versus manhours invested in writing the software according to the customer's needs.

        If you're not familiar with Cython, you might want to have a look at it. With Cython you add annotations to Python code and the Cython translator compiles it to 'C', and then your C compiler compiles it to native code which you can call directly from a normal Python program. There are only certain types of problems where it offers any advantages (since writing it in C isn't necessarily much, if any, faster), but in those specific applications where it does, it can be as fast as native C, since of course it is actually C.

        I wrote the wrapper for the library that I mentioned above by hand. It's not hard to do that if you've done it at least once, but it's also not well documented for beginners. The recommended method is through the CFFI library, but that has a bit more overhead than a hand-rolled wrapper and I was looking for minimum latency.
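        For comparison, ctypes (also in the standard library) lets you skip the wrapper entirely for simple cases. This sketch borrows sqrt from the system C math library, which is an assumption about the platform:

```python
import ctypes
import ctypes.util

# Load the C math library and declare sqrt's signature so ctypes
# marshals the double correctly in both directions.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```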

        Generally though, if you have a problem which would be solved by re-writing it in C, have a look for open source libraries. If it is a common problem, chances are someone has already written a Python interface to it.

        As for your application being 30 times faster, that's a good example of something where parallel threads in Python would have been no real help. You would have needed at least 50 CPUs (accounting for overhead) to equal your solution. Doing as you did, or converting using Cython, is in fact the more accepted sort of solution in the Python community.

        There's also Pypy, which is probably the second most popular implementation of Python. It uses a JIT compiler (like a lot of languages do these days) and so can speed up certain types of problems automatically. However, like all JIT compiled languages, JIT comes with a penalty in memory consumption and garbage collection pauses which many people don't want to pay.

        In addition, there are third party JIT compiler systems which are add-ons to the standard CPython implementation and which act only on those functions you select, rather than on the whole program.

        There are loads and loads of different performance improvement solutions to most problems which can happen in Python, which is why it simply isn't a big deal to most people who actually work with it enough to know about them.

        As for where you said: "It makes the point that Python is NOT well suited to a lot of things, from numerical calculations, to parsing a binary file and generating CSV and XML data. Because that is what the C language utility does."

        Have a look at the Python standard library, the examples you gave are written in C and simply called from Python as library modules. That's the standard library!

        Large scale numerical work is typically done with Numpy, which in turn is written in a combination of C and Fortran. Numpy is not part of the standard library, but is an open source library which is considered to be the "standard" way of dealing with numerical problems in Python.

        1. bombastic bob Silver badge
          Devil

          Re: Python and Threading Explained

          "if you have a problem which would be solved by re-writing it in C, have a look for open source libraries"

          it's been my experience, given all of the 3rd party libraries I've either been asked to search for or been handed with the request to use, that in the amount of time needed to twist things to fit and learn their often ridiculous (and inadequate) API, I'm better off just pounding out the code myself, because I'm just "that good" and have been doing this for so long it's just faster, better, and less likely to have problems.

          Once I was handed a 3rd party graphics lib, because it generated 3D charts. I struggled with their ridiculous API. Then I got fed up and said "look, I've wasted too much time already. I could write something better/faster in C++ and it would look better and NOT have a license fee attached." I think I spent less than a week on it, and the 3D charts looked like real 3D, and the side-by-side was almost embarrassing for the 3rd party lib.

          Anyway, that's just one example. I see plot libraries, math libraries, supposedly difficult calculations that are really trivial examples of algebra and loops, spatial stuff, and none of it is all that complicated. And if I spend 10 hours looking for a proper library, and another 10 to 20 hours evaluating it, by then I'd have written the proper solution already.

          And then there was this one time that "they" wanted to use opencv to display a camera in real-time while doing analysis on it. well, opencv just had to have 1 second of buffering, and that was inadequate for real-time. So I used gstreamer to grab the camera data live, then converted it to a bitmap and did the analysis on it directly, proving the concept and avoiding the monolithic library. Not only that, but I was able to use "red only" (this was required for infrared actually) and generate a monochrome image from it, do the analysis on the image, and track an object based on its shape, frame by frame. without opencv.

          Anyway, I can think of more examples *like* that but that's what I've experienced with "3rd party libs". The people who write them aren't smarter than me, but they might have more time. And they're not panacea solutions, and for most things, I'll just write it myself. [but for an entire OS I think I'll use a 3rd party OS like Linux or FreeBSD, heh]

      2. AdamWill

        Re: Python and Threading Explained

        So apart from thames' entirely correct point, there's another thing you're missing, here. It's the old point about optimizing the wrong thing, basically.

        Here's a real world example: I had a Python app which didn't do anything terribly complex on the local system...and then, as part of its work, had to run three separate queries to a very slow-responding network server. Like, each query would take 30-45 seconds to run.

        So here's the basic performance profile of the app: everything that actually happens locally, as part of the app's code, took, oh, about a half a second to run. And then those three network queries took ~2 minutes. On each run.

        Conclusion's pretty obvious, right? To the overall performance of the app it doesn't matter a jot whether all the code that executes locally is written in Python, Perl, C or frickin' COBOL. If rewriting all that Python in C would make it thirty times faster, that would save...0.48 seconds, from a total execution time of...2 minutes 0.5 seconds. Woop woop.

        *However*, if I could just get it to run the three network queries concurrently, that would save about 80 seconds. (Which, of course, is what I did).

        So, yes, Python is not a particularly fast language. But that doesn't mean there's no reason why you might want to use some kind of concurrency in a Python codebase.
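        A sketch of that fix with the standard library (the sleep stands in for the ~40-second queries, and the query names are invented):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_query(name):
    time.sleep(0.2)              # pretend this is the network round-trip
    return name + ": ok"

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    # the three waits overlap, so this takes ~0.2s rather than ~0.6s
    results = list(pool.map(slow_query, ["users", "groups", "perms"]))
elapsed = time.monotonic() - start

print(results)
print(f"took about {elapsed:.2f}s")
```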

  14. Wil Palen

    Is the GIL gone yet?

    No? Wake me up when it is.

    1. thames

      Re: Is the GIL gone yet?

      It was gone at least 10 years ago. Just use a version that doesn't have a GIL if that's what you want.

  15. CheesyTheClown

    Multithreaded programming is easy, Multithreaded coding is not.

    A person with a sound understanding of multithreading and parallel processing should have absolutely no problem planning and implementing large-scale multithreaded systems. In fact, while async programming is super-simple, it has many caveats which can be far more complicated to resolve than multithreaded code.

    That said, if one is building a database web-app, using asynchronous coding is perfect. It's absolutely optimal for coders without a proper education in computer science.

    Of course, async patterns can fail absolutely when there is more to application state than single operation procedures. Locking becomes critical when two asynchronous operations are altering state that impact one another. At this point, we are left with the same problems as when threading is used. The good news of course is that the async paradigm generally offers additional utilities to assist with these specific scenarios.

    I use the async paradigm often as it offers a poor man's solution to threading which can be quick and easy to maintain.

    Back in 1991 (or so), Dr. Dobbs presented a nice approach to handling concurrency that more people should read. It's a crying shame they didn't just open source all articles when they shut down.

    1. Down not across Silver badge
      Thumb Up

      Re: Multithreaded programming is easy, Multithreaded coding is not.

      Back in 1991 (or so), Dr. Dobbs presented a nice approach to handling concurrency that more people should read. It's a crying shame they didn't just open source all articles when they shut down.

      I'll second that. I grew up with Dr. Dobbs Journal. I remember lounging around with stacks of Journals, coffee and full ashtrays. It had some great columns and articles.

    2. Roo

      Re: Multithreaded programming is easy, Multithreaded coding is not.

      "In fact, while async programming is super-simple, it has many caveats which can be far more complicated to resolve than multithreaded code."

      The problem I most often see is that folks haven't grasped the underlying theory of the parallel programming methods they employ - so they don't have sufficient information to 1) select the best option and 2) use it.

      Case in point: I'm seeing a bit of Futures creeping into common usage in Java, which combines the pros and cons of threading with the pros and cons of async... You thought 'goto spaghetti' was bad? Try futures spaghetti: not only is it tough to follow, it also adds unpredictability to the mix. It gets really fun when some code throws an unexpected uncaught exception a couple of weeks after the guy wrote it, at which point said dude says "This is too complicated". :)

      I think futures have their place - a dark corner under a dusty blanket with a "Bio-hazard" sticker on it :)

  16. disgustedoftunbridgewells Silver badge

    Read as:

    We can't fix the GIL, so let's pretend event driven programming is better.

  17. James Anderson

    Nearly as good as Perl

    PERL has been doing this async type stuff since PERL 5 (and a lot of it before).

    It's nice to know that Python has got there eventually; now if it could just do it a little faster in a little less memory, it could become a language useful for things other than prototyping and POCs.

    1. thames

      Re: Nearly as good as Perl

      Python has had async for about 20 years. All that has happened recently (that is a few years ago) is that a new version of it has been created to bridge compatibility between most of the popular third party async frameworks such as Gevent and Tornado so that other third parties can more easily write libraries that work with all of them.

      Async programming has become popular with web applications because of changes in web programming such as AJAX and Web Sockets, where clients have been holding open connections for longer. If you try to handle this server side with conventional threads (in any language), you consume a lot of memory. Async accomplishes the same thing with far less memory consumption. The level of interest in this is simply a reflection of the current level of interest in web and mobile applications.

  18. Wiltshire
    Linux

    I'd like to tell you about the scaled-out racks of Raspberry Pi 3's we've built. Running web servers written only in Python 3. Generating async dynamic server-side code, with not even a single line of Javascript. But I've signed a Non-Disclosure Agreement. So I better not tell you.

  19. Paddy
    Megaphone

    Shock Jock?

    You quote Zed Shaw, but at the time I thought he was being alarmist to become popular in the Python world. He had something to peddle to a new Python audience.

    1. thames

      Re: Shock Jock?

      Yes, Zed Shaw specialises in saying "controversial" things in order to attract attention to whatever book or course he's flogging this month. I would take any statements from him with a large grain of salt.

      As the story notes, when he had a Python 2 book out, he bashed Python 3 as being "doomed". Now that he has a new book out which focuses on Python 3, suddenly Python 3 is fabulous.

  20. Nimby
    Facepalm

    Not getting it. Maybe in 4...

    I've been a software engineer for a long time. Maybe too long. Python is still one of my favorite languages. But favorite and most usable are two different things. Async and threading are two different things. Multi-threading and multi-processing are two different things. Backward compatibility and future-proofing are two different things. Linux and Windows (Python community is **** at testing on Windows, especially with MS compilers) are two different things. Server-side and GUI application are two different things.

    And on and on. Python's problem is that ever since it split between 2 and 3 it is no longer about "all of the above". You no longer get everything but the kitchen sink AND the kitchen sink. Python is no longer a language where you can have your cake and eat it too. Python has become about choosing what you need and balancing the flaws of that choice against the benefits. And when you have to start doing that, it becomes easy to look at other languages that, frankly, run faster.

    And the reasons are awful! The vast majority of the many divides in Python today are not there for good technical reasons, but because of bad attitude. It's that bad attitude that is damaging Python. There is no single right way. There are many ways. There always have been. Python needs to go back to that. Together it stands. Divided...

  21. rmullen0

    I'll stick with C#

    Welcome to where C# was years ago. I'll stick with a statically typed language, thank you. Does Python have an equivalent of C#'s await keyword that allows you to program asynchronously the same way you do synchronously?

    1. bombastic bob Silver badge
      FAIL

      Re: I'll stick with C#

      "Does Python have an equivalent of C#'s await keyword that allows you to program asynchronously the same you do synchronously?"

      probably not. but just because it's in C-pound does NOT make it great.

      C-pound is a java-like wrapper around Micro-shaft's *HIDEOUS* ".Not" architecture. Period. They couldn't embrace, extend, and extinguish Java, so they made their own. good for them. It's hard to see it *EVAR* get above 6 percent on the TIOBE index, even after all these years of Micro-shaft shoving it at developers like it's a panacea language.

      Python, on the other hand, is TRULY platform independent. Use Python with GTK and you can make platform independent GUI applications. And don't bother with the "mono" and ".Not WHORE" nonsense. those are just LAME.

      /me screamed when gnome added Tomboy and forced the 'mono' crap into gnome desktop as a huge monolithic pile of dependencies. Fortunately it went away.

      Python makes an excellent wrapper around programs (and, granted, python extension modules) written in a proper compiled language like C or C++. Whereas, C-pound is just another Micro-shaft hack, for people who've drunk their coolaid and become addicted to it.

      no thanks to C-pound.

      1. rmullen0

        Re: I'll stick with C#

        Python has been around since 1991. Unfortunately for you, even after all that time Python is behind C# on the TIOBE index. https://www.tiobe.com/tiobe-index/ No thanks to weak-sauce dynamic languages like Python. If I wanted that, I would go with JavaScript, another lame piece of garbage.
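      For the record, the answer to the question upthread is yes: since version 3.5, Python has had async and await keywords that let asynchronous code read like the synchronous version, much as C#'s await does. A minimal sketch (the fetch function is made up for illustration; the sleep stands in for real I/O):

```python
import asyncio

async def fetch_user(user_id: int) -> dict:
    # Hypothetical network call; awaiting suspends this coroutine
    # without blocking the event loop, much like C#'s await.
    await asyncio.sleep(0.1)
    return {"id": user_id, "name": f"user{user_id}"}

async def main() -> dict:
    # Reads top-to-bottom, like the synchronous version would:
    user = await fetch_user(42)
    return user

user = asyncio.run(main())
print(user["name"])   # user42
```

Structurally this is the same pattern as C#'s `async Task<T>` methods: the keyword marks suspension points and the runtime's event loop interleaves the waiting coroutines.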

  22. jacques_de_hooge

    Python with async/await in browser

    You can now also use Python 3.6 including async/await in your browser.

    www.transcrypt.org

  23. P.B. Lecavalier

    python and JS, apples and oranges

    I don't understand this alleged competition between Python and JavaScript. One was designed as a general-purpose language; the other is a web development creature. Whenever I hear about someone using JS outside of web development, it just sounds very, very wrong. Forget about Perl, Ruby and every other scripting language that has been developed. Let's just expand JS... because!

    One thing the author of the article pointed out quite correctly: the packaging system in Python. It's a good one, I believe, and it works, but it is so complicated to get into, even for the simplest package! You've got "source distributions" and "built distributions", you've got "eggs" and "wheels"... And when you try to figure out what any one of those is and which you should use, what you face is one heap of technobabble. The book on Python project development and packaging has yet to be written, probably because if it were written, it would amount to saying "It's all one big work in progress".


Biting the hand that feeds IT © 1998–2019