Re: Poor instructions @Dave 126.
I've only looked at UK pressings under UK light, so they are for 50Hz.
If you look at 45s from the era of auto-changers, you will see that many of them had a circular 'bump' track between the innermost groove and the label. This was there so that when they were stacked, they would 'lock' together, preventing the upper ones from slipping while being rotated on top of a stack of lower disks.
What was more interesting is that the number of 'bumps' was chosen so that, when viewed under a bright mains filament light while spinning on the turntable, they would appear static (the strobe effect) if the turntable was running at the right speed, although you had to look very hard.
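As a back-of-the-envelope check (my own arithmetic, not from any pressing spec): a filament lamp on mains brightens twice per AC cycle, so the marks appear stationary when the number of marks per revolution equals flashes-per-minute divided by rpm:

```python
def strobe_marks(mains_hz: float, rpm: float) -> float:
    """Marks per revolution that appear stationary under a mains-powered
    filament lamp, which flashes twice per AC cycle (once per half-cycle)."""
    flashes_per_minute = 2 * mains_hz * 60
    return flashes_per_minute / rpm

# 33 1/3 rpm on 50Hz mains works out exactly (180 marks);
# 45 rpm gives 133.33..., so any 45rpm pattern is only approximate.
```

Interestingly, 45rpm doesn't divide evenly at 50Hz, which may be part of why you had to look so hard.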
I have a copy of Tommy by the Who, which was a two LP set, which had sides 1 and 4 on one disk, and 2 and 3 on the other. This was so that you could play sides 1 and 2 on an auto-changer, and then turn both disks over together as a sandwich to play sides 3 and 4.
Mind you, the weight of the records falling down the spindle, especially the heavier vinyl used in the '60s and '70s, was such that I was always surprised that the turntable survived. I suspect that is why the BSR decks (at least) had spring suspension to absorb the impact, not for any audio isolation. My Grandmother also used the auto-changer on her PYE Stereogram (about the same size as a small sideboard) for shellac 78s, which were really heavy.
We had infrastructure that had small numbers of large systems that controlled their own resources, be it memory, CPU, storage, or networking, with software components optimizing the use of those resources. It ran on hardware that had enhanced RAS capabilities, and was quite expensive. Call this Stage 1.
Since then, we've been through:
Stage 2. Multiple smaller systems, each controlling their own resources, but they were cheaper.
Stage 3. Rolling all storage for these multiple systems into centralized storage solutions to make storage more flexible.
Stage 4. De-duplicating the storage systems, so that the multiple OS files (and really only these files) would not have multiple copies wastefully stored.
Stage 5. Virtualising all these multiple systems onto larger servers to save money and reduce wasted CPU and memory through resource sharing, putting it all on expensive systems with enhanced RAS.
Stage 6. Replacing the SAN with software defined storage systems.
Stage 7. Moving your communication infrastructure into the virtualised environment.
Stage 8. Virtualising the software defined storage systems into the enhanced RAS systems.
So where are we?
We will now have infrastructure that has small numbers of large systems that control their own resources, be it memory, CPU, storage, and networking, with software components optimizing the use of those resources. It runs on hardware that has enhanced RAS capabilities, and is quite expensive.
All we appear to have done is replaced the OS with a hypervisor, moving everything one rung up the ladder, and we now have the traditional OS fulfilling the same function as the application runtime environments.
The next step will be to replace the traditional OS with a minimal runtime (hmmm, is that what containerization is all about), and we will have reinvented the Mainframe!
I've added the joke icon to try to deflect all of those of you who will try to point out the difference in detail between mainframes and hyperconverged systems.
He had form before systemd, in that he was responsible for the cluster-fuck that was PulseAudio, which was only really fixed after he moved on, I understand.
That was also an over-arching package that tried to control everything audio-wise. He tried to shift the blame onto the distro maintainers (particularly for Ubuntu), but I struggled with it for many years (the main problem being resampling rates and buffer under-runs after suspend on IBM Thinkpads, leading to gaps in the audio) before it suddenly just worked after an update.
Back in the tail end of the noughties, I think that one issue pushed more curious people away from Linux than anything else!
I don't doubt your longevity with UNIX. It's definitely longer than mine (6th Edition, 1978), but I seriously doubt that you were using 7th Edition in 1975 (although I can believe the PDP-11/45 in that time scale).
Most of the V7 documentation is dated 1978, and the Levenez timeline dates 7th Edition to 1979, so unless you were working in Bell Labs, I suspect that you were using 5th or 6th Edition in 1975.
Sorry to nitpick.
I must admit it is the use of XML and the severe scope of systemd that I don't like, as to me, it makes the startup of Linux pretty opaque.
Not just nuclear. All bio and fossil fuels are carbon stores (a carbon battery?) that can be re-charged over years and megania (is this a word? A thousand millennia? It should be!) respectively from an outside power source (the sun).
Unfortunately, all you're really doing is moving energy around (you never 'generate' energy - merely convert it from one form to another, including matter - E=mc²), and will continue doing this until the heat-death of the universe!
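To put E=mc² in perspective (my own illustrative numbers, not from the post), even a gram of matter corresponds to an enormous amount of energy:

```python
C = 299_792_458  # speed of light in m/s (exact, by definition)

def rest_energy_joules(mass_kg: float) -> float:
    """E = m * c^2: the energy equivalent of a rest mass in kilograms."""
    return mass_kg * C ** 2

# One gram of matter is roughly 9e13 joules - about 25 GWh.
```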
Or alternatively, your meter is reaching its end-of-life (they are only certified to be accurate for a fixed period of time) and needs to be replaced anyway.
If this is the case, and it were me, I'd want to make sure a direct equivalent of the existing meter is installed, not a smart-meter.
The process was under the control of the HMC (Hardware Management Console). It would create the file, execute it, and delete it, and if anything failed, the entire process failed.
I'm no stranger to doing exactly as you suggest (even using hex editors to hand-hack binaries) to move files in awkward locations to better ones, but in this case, there was no point where I could break into the process to alter the location it was trying to use.
I even had a jail-broken HMC, and worked through how the process worked. It was using a script on the read-only filesystem, so the script was immutable, even by changing the file on the server serving it (there was some strangeness in the NFS implementation where changes on the server were not picked up on the client, something to do with it being a read-only mount with NFS caching enabled). So while I could reboot the server to pick up the changes, that negated being able to hot-swap the PCIe cards.
We did the work. I just wanted to have the process fixed, because I have what sometimes appears to be a perverse desire to see defects fixed, rather than working around them (especially as I had already worked through the issue, and could point to exactly where the defect was).
Must be something to do with me having worked in Level 2 AIX support for a number of years. I really don't like having to tell people who are supposed to be providing support to me how to do their job.
I'm a really awkward customer!
The problem with PCs in general is that if you use the old DOS MBR partition system, you can only have 4 primary partitions, and everything else has to be in an extended partition in one of the primary partitions. This generally meant that Linux was installed in a single partition, as in a dual-boot system you could not guarantee that there was more than one partition available for filesystems.
On my laptop, I used to have a rarely used Windows 7 (32 bit) primary partition, two Ubuntu OS primary partitions (one my current use release, and the other either a previous or the next version of Ubuntu depending on where I was in evaluating the LTS releases), and an extended partition containing a /home filesystem and the swap space (plus any partition backups I wanted to keep).
When I got my latest 2nd user Thinkpad, I found that Windows used two primary partitions, adding a boot partition. I dropped one of the Ubuntu OS partitions, although I did reserve the space in the extended partition for it for future use.
I really need to think about migrating to Xenial Xerus, but I'm not 100% sure I can install Ubuntu in a secondary partition. Maybe I should just bite the bullet and do a dist-upgrade, but I am not comfortable clobbering my current daily use OS with no fallback.
My next laptop will presumably have a GPT, but that's no reason to replace my perfectly functional current system.
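For anyone curious, the four-primary-slot limit is visible right in the MBR's on-disk layout: four 16-byte partition entries starting at offset 446 of the first sector. A minimal sketch (my own illustration) that parses them:

```python
import struct

def parse_mbr(sector: bytes):
    """Return (type, start_lba, sectors) for each non-empty primary slot
    in a 512-byte MBR boot sector."""
    assert len(sector) == 512 and bytes(sector[510:512]) == b"\x55\xaa"
    parts = []
    for i in range(4):                       # only four primary slots exist
        base = 446 + 16 * i
        ptype = sector[base + 4]             # partition type byte
        start_lba, nsectors = struct.unpack_from("<II", sector, base + 8)
        if ptype != 0:
            parts.append((ptype, start_lba, nsectors))
    return parts
```

An extended partition is just one of those four slots with type 0x05 (or 0x0F), with the logical partitions chained inside it.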
I think it was a matter of convention and knowledge. The install docs (V7 here, nroff source) for Bell Labs UNIX did not give very many hints about how to do it, and if all you did was to follow the docs with a single disk system, you would end up with a layout that probably left you with nowhere other than /usr to store user files (Sorry, I did have links to PDF formatted documents from the Lucent UNIX archive, but that appears to have disappeared - still, "groff -ms -T ascii filename" will make a reasonable attempt to format these for the screen).
On the first UNIX system I logged into in 1978 at Durham University, there was a separate /user filesystem which mapped to a complete RK05 disk pack (about 2.5MB per pack). / and /usr (and the swap partition) were on disk partitions on a separate RK05 disk pack. At this time, in V6 and V7, disk partitions were compiled into the disk driver (in the source), and IIRC, the default RK05 split was something like 25%, 60% and 15% for root, usr and swap.
Whilst I was there, the system admins (mostly postgrad students) added a Plessey fixed disk that appeared as four RK05 disk packs, which allowed them to give ingres its own filesystem. This happened at the same time that V7 was installed on the system, over the Summer vacation in 1979.
When I installed my first UNIX system (1982, again V6 and later V7 UNIX), I kept a similar convention, although I had two 32MB CDC SMD disks to play with, configured as odd-sized RP03 disks. I split each of the disks up as either four quarter disks, two half disks, or one complete disk - don't use overlapping disk partitions! (again in the device driver source of V7 UNIX). It was a very involved process getting UNIX onto these non-standard geometry disks, but that's a tale for another day.
During this time, I also had access to an Ultrix system which used /u01, /u02 etc. (a BSD convention).
When I worked at AT&T (1986-1989), they also used the /u01, /u02... convention for user filesystems.
Following that, I've always had a /home filesystem for user files.
I've been working with UNIX for 38 years (Bell Labs V6 onwards), and while I don't disagree with you, /usr has never been used for user files in my experience in all that time. I think I read in one of the histories of UNIX that it might have been used like this on the earliest PDP-7 releases (before my time), before they moved to the PDP-11.
What was common was to actually have a /user filesystem in addition to /usr, although a convention adopted from BSD I think often had /u01, /u02 etc for user files.
IIRC, Sun introduced the concept of /home.
Sun introduced a filesystem layout back in the '80s with SunOS 2 (I think), where /usr was a largely immutable filesystem.
What this allowed was for a server supporting diskless clients to share its own /usr filesystem with those clients.
If anybody cares to remember, the diskless client model meant that Sun 2, 3 and 4 workstations could just be CPU, memory, display and network, with no local persistent storage. Back when SCSI disks were very expensive, this allowed you to centralise the cost in a large server, and keep the cost of the workstations down.
The model was that all filesystems were mounted over NFS, with / and /var (a new filesystem in this model) mounted (IIRC - my memory could be faulty, and confused by the differences between the Sun and IBM models) from /export/root/clientname and /export/var/clientname on the server as read-write filesystems. /usr (and later /usr/share) were mounted read-only, served either from the server's own /usr and /usr/share if the clients ran the same architecture and OS level, or from some other location which mirrored /usr if the clients ran a different version (this allowed SPARC architecture systems to be served from Motorola ones, or vice-versa).
Directories such as /etc, /var/adm, /usr/spool and /usr/tmp, which would otherwise have been on read-only or read-mostly filesystems, became symlinks into /var (which was unique to each client, as it was mounted from a different directory on the server).
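As a rough illustration of that mount layout (hypothetical server and client names, and modern fstab-style syntax rather than the exact SunOS format):

```
# sketch of a diskless client's NFS mounts - all names are made up
server:/export/root/client1   /           nfs   rw   0 0
server:/export/var/client1    /var        nfs   rw   0 0
server:/usr                   /usr        nfs   ro   0 0
server:/usr/share             /usr/share  nfs   ro   0 0
```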
Other vendors including IBM and Digital adopted very similar layouts for clusters of diskless clients. With IBM in 1991, it appeared with AIX 3.2 (and was refined in 3.2.5). The filesystem layout meant that no machine should really write into /usr except during an upgrade, confining any variable files to /var. Unfortunately, many people (including IBM software developers) forgot this, and over the years, software expected to be able to write into directories below /usr.
Interestingly, the IBM 9125-F2C, aka the Power7 775 supercomputer, running AIX, reintroduced the concept of diskless clients in 2011. The filesystem layout was modified slightly, with the concept of a stateful read-only NFS filesystem (STNFS), which allowed changes to the read-only filesystem to be either cached in memory for the duration of the OS run (a bit like a union filesystem), or files/directories from a read-write filesystem to be mounted point-to-point over entities on the read-only filesystem.
/ became an STNFS read-only mount, /usr was a read-only filesystem, and /var was a read-write mount off an NFS server. /tmp was left on the / filesystem, meaning files were lost on a reboot, and also that writing lots of files into /tmp reduced the amount of RAM the node had!
Work related filesystems were mounted over GPFS for performance (NFS was just too slow), although any paging did actually work over NFS (obviously, paging was a major no-no for these performance optimised machines, but we could not get AIX to run without a paging space).
Unfortunately, as I found out, the hot-swap process for adapters, run over RMC from the HMC (Hardware Management Console) had a habit of trying to construct scripts in /usr/adm/ras (on the read-only part of the file tree) to execute to enable the swap, and as a result, we were unable to hot-swap adapters, which caused problems on more than one occasion. I did raise a PMR with support/development, but had trouble arguing the problem through, as the systems were so niche, that the support droids could not understand the problem.
In my view, a server is a real server, a network switch is a real network switch, and a storage subsystem is a real storage subsystem. That's simple (even more so if the storage is local to the server as SDS systems appear to be moving back to).
You get to think about them one at a time, and to scale, you just buy a bigger one of whatever has run out of steam!
I appreciate that the hardware landscape is simple with hyper-converged systems, but the software installation is not! (and I speak as someone who has used LPARd systems with hypervisors and virtualised networks for over 10 years).
I've often thought one of the real reasons why it's caught on is because it allows the PC vendors to sell ever larger, higher margin systems (rather than cheaper, smaller individual systems) on the promise of reduced overall costs or energy consumption. I would love someone to publish a real world study that actually measures these savings.
You also get to suffer the problem of taking a large part of your infrastructure out of service, because you've got to replace a memory DIMM, processor, or other significant part of your hyperconverged system that is running everything.
Oh, wait. You need to invest in workload mobility products to overcome that problem!
Please think about what you said for a second or two. You've been around here long enough to know the problem with what you've said.
Port 7547 is not a well-known port, but nor is it truly ephemeral - it sits in the registered range (it's the IANA-assigned port for TR-069/CWMP) - so it is not beyond the bounds of possibility that it could legitimately be in use by some other piece of software.
Just blocking it could have unpredictable effects.
IP Spoofing. Not really applicable.
There are two ways IP spoofing can have an effect. One is only possible if you are on the same physical network and subnet as the system you're trying to attack, and the other is if you are not trying to open a bi-directional session (normally only if you are attempting a DDoS packet flood or reflection attack, where you don't need any return packets).
In theory, I suppose it could work if you were physically on the same network as the system you're masquerading as, and could knock the management server off the net, or subvert the ARP cache on the router, but if a hacker has physical access to your ISP's infrastructure, then you're probably screwed anyway!
Anything else uses the source IP address in any packet as the destination for return packets, so they get routed to the systems you're masquerading as, not you (this is the reason it works in the same subnet, because there's no routing involved). So you never see any return packets, and thus cannot set up any TCP service as the initial handshake won't work.
One last thought. You could try source-routing the packets, but most routers don't allow this anymore.
You were lucky to have a punching service. I had to punch my own cards in my first job!
Still had to use coding forms, because there was one punch machine shared between four programmers and two Systems Analysts (whatever happened to that job role?), and we weren't allowed to write the program while we were at the card punch.
One of my (now grown up) kids says that the reason why Minecraft makes *any* computer crawl is because it does not use the GPU efficiently (or even much beyond a basic frame buffer if I understood what he was saying), and uses the CPU to render into a pixmap.
He once had an interesting hobby of capturing the most extreme way of making it grind to a halt, and then posting the videos on YouTube.
It seems a little lacking in foresight just leaving the surface as untreated cardboard.
Why don't they at least put a shiny, moderately coffee-proof surface on it (especially as a result of the 'drawback')?
We need a long term test! I think Dabbsy should actually use it, and report monthly on how it is faring with regard to cup rings, grease stains, wear from mice (strangely missing from his desk) and keyboard legs.
Oh, and the use of it to support virgins, if only one at a time.
I recently spent some time learning how modern transformer-less switch-mode power supplies actually operate (thanks, BOLTR on YouTube), and I've changed my mind about how many of the capacitors are on 24x7 in the power supply.
It is quite clear that there are some that don't work this way, but most power supplies in devices with a standby mode nowadays appear to use a basic bridge rectifier and some high-quality capacitors, and then feed the barely smoothed rectified-mains DC into switched MOSFETs and smoothing capacitors/voltage regulators to act as voltage converters. The result is that when the device is in standby, most of the caps on the LT side of the MOSFETs are actually not powered up at all.
Of course, the switching control circuits are powered all the time, as are the first-stage smoothing capacitors on the HT side, but this somewhat reduces the need for over-spec'd devices that will run for 100,000s of hours.
I'm not denying that capacitors fail, but I wonder what the statistical variance on the MTBF figures for the cheapest Chinese capacitors actually is. I suspect this is more likely to cause early failures than devices that get close to the MTBF.
Some time ago, I made a similar point, but was told by someone on the forums that the MTBF is based on the device being used at its maximum voltage and temperature rating.
I was told that if you over-specify the capacitor, for example using ones rated at 105 Centigrade and 1000V in a position that only sees a little over room temperature and 230V, the MTBF would be exceeded many times over.
Of course, what that probably means is that the original designers over-specify the devices, and during the production planning, devices only just exceeding the typical operating environment would be substituted as a cost saving measure.
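The rule of thumb behind that claim is usually stated as "electrolytic life roughly doubles for every 10°C the part runs below its rated temperature" (an Arrhenius-style approximation; the figures below are purely illustrative, not from any datasheet):

```python
def estimated_life_hours(rated_life_h: float,
                         rated_temp_c: float,
                         actual_temp_c: float) -> float:
    """Rule-of-thumb electrolytic capacitor life: it roughly doubles for
    every 10 degrees C the part runs below its rated temperature."""
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10)
```

So a 2,000-hour, 105°C part run at 45°C would notionally last around 128,000 hours, which is why derating matters more than the headline figure.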
I'm not really fussed, as it means that broken things with simple fixes can be bought and repaired for my own use, quite cheaply.
I often wonder just how many of the multitude of flat-panel TVs that appear in our local recycling centre are an easy fix.
Interestingly, I recently had to get a renewal of my passport at very short notice, so I arranged a visit to the closest passport office.
When I was being interviewed (part of the quick application process), I commented about taking glasses off for the photograph, and the interviewer said that it is acceptable to wear glasses in the photograph, as long as the eyes could be clearly seen through the lenses (i.e. no dark glasses, small or half-frames that obscured the eyes themselves, or heavy reflections off the lenses).
I went back and read the passport application, and indeed, this is what it says.
But I'm sure that the jobsworth post-office counter people who do the pre-check would not accept a photo with glasses, though. Last time I used the post-office passport checking service for one of my children, it took me three attempts to get photographs they would accept, and that was without glasses.
It's the only choice. The Red Arrows (and before them the Blue Diamonds) have always flown the RAF fast jet trainer, from the Hawker Hunter, through the Folland Gnat, to the Hawk T1. It's done because of the lower cost and the good handling essential in a trainer, and because the Red Arrows are a part of the Central Flying School.
Um. We haven't had any battleships since 1960. The article quoted is about destroyers, although these are the largest combat ships in the RN until the Queen Elizabeth is commissioned. (Please note, HMS Ocean, Albion and Bulwark are not really combat ships, even though Ocean is the Fleet Flagship).
If you had said "warship" rather than "battleship", you might have been correct.
That's a very interesting point, one I had not thought about, but the term CISC actually refers to a Complex Instruction Set Computer, and is defined by the number of instructions in the set, and the number of addressing modes that the instructions can use. I would say that the memory bandwidth savings were secondary, especially as most early computers' processors and memory were synchronous.
I'm not sure that I totally agree with the definition of a PDP11 as a CISC (although it was certainly several generations before RISC was adopted), but the instruction set was quite small, and the number of addressing modes was not massive (although they were exceptionally orthogonal), so it does not really fit the large-instruction-set, many-addressing-modes definition of a CISC processor.
What made the PDP11 instruction set so small was the fact that the specialist instructions for accessing such things as the stack pointer and the program counter were actually just instances of the general register instructions, so were really just aliases for the other instructions (you did not get to appreciate this unless you started to look at the generated machine code). In addition, a number of the instructions only used 8 bits of the 16-bit word, which allowed the other 8 bits to be used as a byte offset by the branch instructions (contributing to your point about reducing memory bandwidth).
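Those 8-bit offsets can be seen in the PDP-11 branch encodings: the low byte of the instruction word is a signed word offset relative to the updated PC. A sketch of the decode (my own illustration, not from the post):

```python
def branch_target(pc: int, instruction_word: int) -> int:
    """Decode the destination of a PDP-11 branch instruction.
    The low byte is a signed offset in words, relative to the updated PC
    (the address of the instruction plus 2)."""
    offset = instruction_word & 0xFF
    if offset & 0x80:                  # sign-extend the 8-bit offset
        offset -= 0x100
    return pc + 2 + 2 * offset
```

For example, the word 000401 (octal) branches to the instruction address plus 4, and 000776 branches back to the address minus 2.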
One other feature that was often quoted, but was not true of most early RISC processors, was that they execute a majority of their instructions in a single clock cycle. This is/was not actually part of the definition (unless you were from IBM, who tried to redefine RISC as Reduced Instruction-cycle Set Computer or some similar nonsense), although it was an aspiration for the early RISC chip designers. Of course, now that they are super-scalar, and overlap instructions within a single clock cycle and across execution units, that is irrelevant.
Nowadays, it's ironic that IBM POWER, one of the few remaining RISC processors on the market actually has a HUGE instruction set, and more addressing modes than you can shake a stick at, and also that the Intel "CISC" processors have RISC cores that are heavily microcoded!
CISC processors predated the adoption of the terms CISC and RISC. While you could say that, for example, a 6502 microprocessor was an early RISC processor, it was not really the case. The first processor that was really called a RISC processor was probably the Berkeley RISC project (or maybe the Stanford MIPS project), which pretty much branded all previous processors as CISC, a term invented to allow differentiation.
As a result, you can't really claim any sort of design ethos for a CISC processor. Saving memory was a factor, but I don't really think that it was important, otherwise they would not have included 4 bit aligned BCD arithmetic instructions, because these wasted 3/8ths of the storage when storing decimal numbers.
You can say the converse. RISC processors, especially 64 bit processors often sacrificed memory efficiency to allow them to be clocked faster.
The earlier 'classic' Alpha processors (before EV56) did not support byte or word boundary aligned reads and writes from main memory. In order to read just a byte, it was necessary to read the entire long-word (32 bits), and then mask and shift the relevant bits from the long-word to get the individual byte. This can make the equivalent of a single load instruction from other architectures a sequence of a load, followed by a logical AND, followed by a shift operation, with some additional crap to determine the mask and the number of bits to shift.
But you have to remember that in the space of a single instruction on an x86 processor, an Alpha could probably be performing 4-6 instructions (just a guess, but most Alpha instructions executed in 1 or 2 clock cycles compared to 4 or more on x86, and they were clocked significantly faster than the Intel processors of the time - RISC vs. CISC).
Writing individual bytes was somewhat more complicated!
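The read side of that sequence amounts to the following (sketched in Python rather than Alpha assembler; byte index 0 is the lowest-addressed byte in a little-endian longword):

```python
def extract_byte(longword: int, byte_index: int) -> int:
    """The mask-and-shift a 'classic' Alpha had to do in place of a single
    byte load: pull one byte out of a 32-bit longword it has just read."""
    return (longword >> (8 * byte_index)) & 0xFF
```

A write was the inverse: mask the old byte out of the longword, OR the new byte in, and store the whole longword back.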
I was told that this also seriously hampered the way that X11 had to be ported, because many of the algorithms to manipulate pixmaps relied on reading and writing individual bytes on low colour depth pixmaps.
On top of the normal Centigrade/Celsius/Fahrenheit issues, the author has interspersed Kelvin, with both Kelvin and degrees Celsius converted into Fahrenheit, but no conversion from Celsius to Kelvin (I know, add 273.15 to the temperature in Celsius).
Technically correct, but confusing, especially as it is too easy to read K as in Kilo if you're not paying attention!
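For reference, the conversions involved (standard formulae):

```python
def c_to_k(celsius: float) -> float:
    """Celsius to Kelvin: add 273.15."""
    return celsius + 273.15

def c_to_f(celsius: float) -> float:
    """Celsius to Fahrenheit: multiply by 9/5, add 32."""
    return celsius * 9 / 5 + 32

def k_to_f(kelvin: float) -> float:
    """Kelvin to Fahrenheit, via Celsius."""
    return c_to_f(kelvin - 273.15)
```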
That scheme (allowing DHCP to allocate addresses and hope that devices get the same addresses even when the lease expires) works until it doesn't, and then the consumer who didn't need to know how things work will be completely stuck when their port forwarding rules stop working.
Most DHCP servers on consumer grade routers allow you to reserve persistent IP addresses for certain MAC addresses. I don't see what is so difficult about setting up persistent addresses that will be fixed. After all, in order to set up port forwarding rules, one has to know something about IP and port addressing.
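As an example, many consumer routers and Linux firewalls use dnsmasq as their DHCP server, where a reservation is a single configuration line (the MAC address and IP here are made up):

```
# always hand 192.168.1.50 to the NAS with this MAC address
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50
```

Routers with a web UI are usually just writing an entry like this behind the scenes.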
You have a point, but to be hacked, you need a vector to get to one of these devices.
If they are snug and secure behind a firewall (even one in a consumer grade DSL router), it will not be possible to even get to the device to attack it, regardless of how easy it is to hack. The reason why UPnP is being mentioned so much is that it is commonly used to expose the services of this type of device to the internet through a firewall.
Unless you can show that the devices were either on an un-firewalled network or directly connected to the Internet, you're going to have to come up with a way that the attacker could initially get to the device to hack it other than UPnP. Until you do, that is still going to be the most likely culprit.
Whether you like it or not, UPnP is a way for undisciplined devices to expose themselves. It's just a flawed service, and many knowledgeable people agree.
Probably WiFi connected room speakers, like the ones SONOS sell, and using UPnP to allow the music appliance to find them. Not my cup of tea, but whatever.
My speakers are connected to their amp via some old-fashioned 5A multi-strand lighting cable. Funny, I tried to buy some cable recently, and got the distinct impression that it was no longer available (at least as mains cable), I suspect because in the UK mains cable now needs to be double-insulated.
All I can get now appears to be specific 'speaker' cable, at stupid prices!
Totally agree re. UPnP and WPS, but if you want to set up the port forwarding rules yourself, you probably have to fix the IP addresses of the servers you want to port-forward to, either with manual IP addresses or fixed DHCP MAC-to-IP mappings.
Changing the password is a no-brainer that people do immediately anyway, isn't it? I even generate my own WiFi keys so as not to use the default, just in case it can be derived from some other information on the router, and hide the routers behind a Linux firewall and separate DSL modem.
The thing is, people I know ask why I do all this, when all they do is plug it all in, and press that little button on the router to register a device. "It's so much easier", they say.
If only I could directly implicate their network as being part of the botnet, I could show them the error of their ways...
...well, more correctly IBM Spectrum Scale Storage, is a block-based protocol (unless you're using the built-in NFS bridge), putting the onus of working out where the storage for files is onto the client.
If you're talking about it working like a NAS, then you've probably come across it in its SONAS storage appliance persona, not in its GPFS client/server software-defined storage persona.
"EARTHMEN, WE ARE PEACEFUL BEINGS AND YOU HAVE TRIED TO DESTROY US, BUT YOU CANNOT SUCCEED. YOU AND YOUR PEOPLE WILL PAY FOR THIS ACT OF AGGRESSION. THIS IS THE VOICE OF THE MYSTERONS. WE KNOW THAT YOU CAN HEAR US, EARTHMEN. OUR REVENGE WILL BE SLOW BUT NONETHELESS EFFECTIVE. IT WILL MEAN THE ULTIMATE DESTRUCTION OF LIFE ON EARTH. IT WILL BE USELESS FOR YOU TO RESIST, FOR WE HAVE DISCOVERED THE SECRET OF REVERSING MATTER, AS YOU HAVE JUST WITNESSED. ONE OF YOU WILL BE UNDER OUR CONTROL. YOU WILL BE INSTRUMENTAL IN AVENGING THE MYSTERONS. OUR FIRST ACT OF RETALIATION WILL BE TO ASSASSINATE YOUR WORLD PRESIDENT."
Secure, hell no.
One thing it allows is any internal device to knock inbound holes in your firewall, without your knowledge or approval.
I appreciate that without it, some consumers would have to learn something, but the downside is that all the IoT devices that sit inside home networks and use UPnP can potentially become participants in a DDoS attack like this.
Do consumers worry about this? Probably none of them understand what it was that caused the Dyn DNS outage, let alone whether their house was part of the cause.
But should we? Definitely yes, if we want to maintain a functional and usable Internet!
I run my firewall with UPnP disabled, so it works inside my network for device discovery, but the firewall can't be controlled, and there's not that much that either I or the other members of my family have noticed that doesn't work.
The problem with Shields Up! is that by default it only checks the reserved ports 0-1023.
You can use it to do custom scans, but the standard check will not check whether UPnP has opened up ephemeral ports through your firewall, and once these are set up, they could allow C&C channels to any devices.
But most edge-firewalls allow outbound connections to a co-ordination server anyway (it really would be a pain to have to configure individual ports on the firewall), and once a session is established, will allow return control requests (remember, TCP/IP sessions are bidirectional) even without UPnP (ever wondered how your network-attached, print-from-anywhere printer works? Well, this is it).
Of course, it is necessary to get a foothold in the network before UPnP or outbound requests can be made, but who knows what is baked into the firmware of these IoT devices from China? I tend to run a Linux firewall, and do a sweep of the ports currently in use at the firewall, but it's difficult.
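For what it's worth, the sort of sweep I mean can be sketched in a few lines of Python. This is a crude connect() scan against whatever host and port list you point it at, not a substitute for a proper audit:

```python
import socket

# Crude TCP connect() sweep: try each port and record the ones that
# accept a connection. Slow but dependency-free; a real audit would
# also want UDP and a look at the firewall's own state tables.
def sweep_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Run it against the firewall's LAN address, e.g. `sweep_ports("192.168.1.1", range(1, 1024))`, and compare the result with what you think should be open.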
It's all a bit of a mess. I favour using the vulnerabilities themselves to run destructive code on the IoT devices to break them, but that is illegal in pretty much all jurisdictions.
This is just a collision of current and former acronyms. It happens all the time.
It's getting increasingly difficult in any particular field to come up with an acronym that is meaningful and can be pronounced as a word, because they've already been used.
I play a game with my family that if they use an acronym in a conversation, I deliberately misconstrue what they've said by alternative expansion.
For example, ISA (these are all real):
Industry Standard Architecture
Internet Security and Acceleration (Microsoft ISA server)
Independent Schools Association
Individual Savings Account
International Standard Atmosphere
International Students Association
International Studies Association (not the same as above)
International Society of Automation
International Songwriters Association
International Society of Arboriculture
International Survey Agency
International Sign Association
International Sustainability Alliance
.. and there are others if only I could be bothered to go down the hit list.
The problem I have with trying to understand this technology is relating it to real world problems.
I think I can understand that you can store information in a qubit, and extract that information again, but what I have difficulty with is manipulating that information in a meaningful way, and extracting the result, which is the essence of what computing is about.
Mind you, I also struggled with Fourier analysis, which formed the basis of a now-defunct branch of (analog) computing. IIRC (from my university maths course decades ago), it represents an observable artifact (like a complex waveform) as the sum of a series of simpler mathematical terms, which you can then manipulate using either algebra or vector mathematics to model how the artifact will behave under certain circumstances (although FA has other uses, I understand).
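To make the 'sum of simpler terms' idea concrete, here's a small numpy sketch: build a waveform from two sine components, and the discrete Fourier transform recovers exactly those two frequencies. The frequencies and amplitudes here are arbitrary:

```python
import numpy as np

# Build a 'complex waveform' from two simple sine components, then use
# the discrete Fourier transform to recover which frequencies it contains.
fs = 1000                                  # samples per second
t = np.arange(fs) / fs                     # one second of samples
wave = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(wave))       # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(wave), d=1/fs) # frequency of each bin, in Hz
peaks = freqs[spectrum > len(wave) / 8]    # bins carrying significant energy
```

`peaks` comes out as the 50 Hz and 120 Hz components we put in; once the waveform is in that form, each component can be manipulated algebraically on its own.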
But I've just not seen something that describes how the data in the qubit is manipulated.
I can see something of what Destroy All Monsters is trying to say, in that it is the interaction of multiple qubits that enables you to get meaningful results from a combination of more than one piece of information, but I just can't see how this interaction is controlled. And without control, the whole field appears useless. Maybe I just don't understand the aim; the only thing I can see is that it's not applicable to what we used to call 'general computing'. You're not going to be doing your word processing on a quantum computer!
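The nearest thing I've found to an answer is that the manipulation is done by 'gates', which mathematically are just small unitary matrices applied to the qubit's state vector. A toy numpy sketch of the maths (not the physics, and certainly not the engineering):

```python
import numpy as np

# A single qubit's state is a 2-element complex vector; a gate is a
# 2x2 unitary matrix, and 'manipulating the data' is matrix multiplication.
ket0 = np.array([1, 0], dtype=complex)         # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli-X, the quantum NOT

flipped = X @ ket0                  # |1>: behaves like a classical NOT
superposition = H @ ket0            # equal mix of |0> and |1>
probs = np.abs(superposition) ** 2  # measurement probabilities
```

The Hadamard gate leaves the qubit with a 50/50 chance of reading back 0 or 1, which is where the classical intuition runs out; the 'control' I was asking about is choosing which sequence of these matrices to apply.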
On the subject of people needing to understand maths to be able to even approach the field, what a lot of people forget is that mathematical notation is just like any other jargon. If you don't even understand how the notation works, no amount of reiteration written in that notation will mean anything.
But then again, I realized a long time ago that there was a real ceiling on the amount of understanding I would ever achieve in maths once it got into apparently abstract areas.
It's funny. I've pulled more all-nighters in the last 6 months than I have in the previous 15 years!
The reason why I do it is because it needs to be done, and my kids are now grown up so that I can afford the disruption to my life that some of my colleagues who still have younger kids cannot.
I must admit I am accompanied by a significant number of 20 somethings who have not yet acquired responsibilities outside of work.
Oh, while I'm waiting for the time for my work, I do origami, not table football!
That is a good point. But the way I rationalise it is by considering the on-going employability of people in the UK.
All the time that tax, benefits, health and other infrastructure services, education etc. are funded within an 'arbitrary regional border', I believe that pay and skills should also come mainly from within that arbitrary border.
If it were the case that full movement allowed people at the lower end of the demographic spectrum to get worthwhile jobs in other countries, then it would be great. But what is happening, and will continue to happen, is that people move from poorer countries to richer ones, displacing the lower skilled locals from the workforce because they are prepared to work for lower wages than the locals.
This occurs in two ways. One is the obvious one where locals just don't find work because it's being done by people who are prepared to work for less. The second, and much more subtle one, is that businesses in the UK don't bother training people from the UK. They just bring them in from abroad, saving themselves all of the costs of training.
What this leads to is a de-skilling of the local workforce, which perpetuates the situation that businesses can't recruit skills from the local workforce, so they bring even more people in from abroad. It becomes a self-perpetuating issue, while all the time money could well be leaching out of the UK economy.
But it is not just the UK that is harmed. If you look at countries like Poland, Hungary and even Ireland, so many of their young people with skills marketable in richer countries leave that they are starving their own countries of the skills they need!
I saw a documentary on Ireland that stated some villages effectively don't have any residents between the ages of 18 and 30, because they've all gone somewhere else to find work.
I would love to see a totally egalitarian world, where the resources of the world are equally shared, but we are so far away from that, with no possibility of ever getting there without some world-changing event, that we cannot afford to consider it.
It's absolutely pointless having a country with a 'healthy' economy for the shareholders and owners of the companies, if the rest of the population is un-employed, un-employable, or are effectively wage-slaves of the rich.
The figures are there (just as a quick example). But it would make no sense, because the UK is run as a single economic region, with different areas generating different levels of product. If you split the country out, you will certainly find some areas actually running at a deficit, being propped up by London.
If you want to go down that route, maybe you should ask what Scotland would look like outside of the UK, now that oil revenues have fallen below their very optimistic budget calculations at the Scottish referendum.
On second thoughts, add in the SNP to a Remain coalition party in a General Election, and you may get closer to an overall majority, but it would still require a lot of people with disparate ideas campaigning together, and the resultant government would be squabbling amongst themselves about issues other than Europe.
You clearly don't understand referendums, do you?
While this particular referendum on leaving Europe was not actually legally binding, I cannot see any government not implementing it, because it would simply crush any notion of the UK being a democratic country.
There is no way back. We have to leave. The only way that it could be avoided is by this government calling a general election before invoking article 50, and the election being won by a party explicitly campaigning on not leaving Europe.
I could see a centrist Labour offshoot campaigning in coalition with the Lib Dems and possibly the Greens on this agenda, but I don't see that they would win a majority, although they could probably gain the largest share of the vote of all groups. But they would not have the clout to actually form a government able to carry out the policy.
It is unlikely that the Conservatives campaigning on such an agenda would win (it would show severe hypocrisy) and would split the party, so it would be as much political suicide for the current incumbent as calling the referendum on such a blunt question in the first place was for the former one.
But Theresa May has said that she won't do this, so it's moot.
Biting the hand that feeds IT © 1998–2019