The main reason to use TDMA or CDMA is for SHARING. It allows lots of phones to use the same frequency at the same time (multitasking for comms) because there's more capacity in the stream than a single user needs.
Researchers at Rice University have pulled a neat trick of noise cancellation which they say could double the throughput of wireless systems, by allowing full-duplex communications using a single frequency. As El Reg readers surely know, wireless systems use either time- or frequency-domain multiplexing (or a mixture of the …
Have I heard this story before?
Yeah, I seem to remember it too.
There's been a couple like that lately.
Yes you have - it is called CDMA
You can do that with CDMA today.
You just feed the transmit code into the same algorithm which separates the logical channels in receive.
You do not even need two antennas. You can do the entire thing purely at the signal processing level. It will not come free, however - you will get a hard range limitation because of how long you have to keep the transmit code sequence around to feed into the algo.
This approach however does not double the throughput. You use up codes for transmit out of the same code space that was used to receive and the overall bandwidth of the frequency band remains the same.
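A toy sketch of the idea (illustrative only, not the Rice scheme or any real CDMA standard): two nodes share one channel on orthogonal Walsh codes, and a node's own transmission drops out of the despreading correlation because its code is orthogonal to the one it is listening for - at the cost of using up a code from the shared code space, exactly as described above.

```python
import numpy as np

code_a = np.array([1,  1, 1,  1], dtype=float)   # Walsh code for node A's transmit
code_b = np.array([1, -1, 1, -1], dtype=float)   # orthogonal code used by node B

bits_a = np.array([1, -1, 1])    # data node A transmits
bits_b = np.array([-1, -1, 1])   # data node B transmits

# Spread each bit over its code (one symbol = one code period)
tx_a = np.concatenate([b * code_a for b in bits_a])
tx_b = np.concatenate([b * code_b for b in bits_b])

rx_at_a = tx_a + tx_b            # node A hears B plus its own (strong) signal

# A despreads with B's code; A's own signal is orthogonal and cancels exactly
chips = rx_at_a.reshape(-1, len(code_b))
recovered = np.sign(chips @ code_b)
print(recovered)                 # -> [-1. -1.  1.], i.e. bits_b
```

The cancellation here is exact only because the codes stay aligned over a symbol period - which is where the range limitation above comes from.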
When will it be defined as a standard and used?
One thing that does seem a possible issue would be when you get reflections, i.e. ghosting. This, whilst manageable one way, becomes a whole new area when you share a channel for duplex communications like this. Also, when it comes down to handshaked transmissions combined with possible ghosting, you can see potential areas where this will degrade transmission. I'm sure some radio guru could outline other areas of possible contention with the flow of data using this approach, but either way some sort of fallback needs to be in place, and given that the fallback is the current standard, it's one of those rare situations where accommodating legacy equipment would also solve the problem of not having a fallback built into any new standard/design. At least for the early days of adoption.
Another thing is that speed isn't everything, and whilst this would free up radio space it won't save me any battery power in my transmission or my reception; indeed, if anything I can only see this adding, albeit slightly, to that. You could say you will have more radio space, so less channel hopping and other user-transparent overheads in communications. But it's one of those areas where on paper it will use a bit more power, but in practice might save enough to cover the overheads and then some - early days. But still good stuff at least.
So another 3 more posts about this tech until there is anything new to report? Or is there an ETA on a standard so it can be adopted, royalties paid etc etc, and it gets into use? Though it's more of interest to the telcos of the world than to consumers, who already pay for more radio channels than they get to use as it is :).
Every local reflection of the transmitted signal is going to add another copy at another phase that will want subtracting.
And the front end is going to have to remain linear over a huge dynamic range to avoid intermodulation (mixing) with arbitrary phase products.
It still sounds to me like something you can get working on a bench, but not in real life.
(Although I do believe some of the satellite-pirate phones do this, albeit with a very controlled signal path (no multipath, no local reflections) and a stable loss budget, to steal access in the guard bands)
Yes it's an old thing
And yes, it can be done in a multitude of different ways. And no, none of those, including this one works particularly well, because nearby reflections of your own signal are so much stronger than whatever signal you are trying to receive.
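To put rough (illustrative, made-up) numbers on the reflection problem: even if you subtract the direct copy of your own transmission perfectly, a single -30dB echo left behind still swamps a wanted signal arriving at -90dB.

```python
import numpy as np

rng = np.random.default_rng(0)
tx = rng.standard_normal(4096)                  # our own transmit waveform

direct = tx                                     # 0 dB direct leakage into the receiver
reflection = 10 ** (-30 / 20) * np.roll(tx, 7)  # a -30 dB echo, 7-sample delay

rx = direct + reflection                        # what the front end actually sees
residual = rx - tx                              # cancel only the direct path

# Residual relative to the transmit signal, in dB
residual_db = 20 * np.log10(np.std(residual) / np.std(tx))
print(round(residual_db))                       # -> -30: the echo is untouched
```

So every nearby reflection needs its own delayed, scaled, phase-shifted cancelling copy - and the reflections move whenever anything near the antenna does.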
This just looks like a very crude version of MIMO.
There's already some pretty solid mathematics, and a large amount of experimental work has been done, to work out how much additional information (above the traditional single-antenna limit) you can squeeze into a given frequency band using multiple antennas. I'm not really clear what this is supposed to show. It doesn't seem to address any of the real-life problems.
Much like feed-forward amplifier distortion cancellation techniques, it is fine in theory but doesn't deliver much in practice, because real-world semiconductors don't completely conform to their simplified mathematical models and have non-linearities, distortion and noise. That's before even considering local reflections, intermodulation in other transmitters on the same tower, splatter from rusty supports forming diode junctions, etc etc etc. It can be achieved across a room, but it's far more difficult to get the performance required for a real 35km-radius cell.
What he said
OK, maybe I'm pointing out the obvious here, but doesn't the "A" end know what it has transmitted without listening in? Surely it just needs to "remember" what it said and remove that from the received signals.
(coat on, off to the patent office)
Bin there, done that
and it does not work.
There is a huge difference between the transmitted and received signal, and this means the cancellation signal must be utterly precise - over 90dB to be of any use. That is achievable in the lab, but is just impossible in the real world. We have tried a very similar setup and found that even having someone wandering near the antenna can destroy the cancellation.
I know of one company who claim they can receive signals whilst simultaneously transmitting a jamming signal, and they were using exactly this technique. They were very coy about their cancellation ratio - with good reason, as we found the cancellation was about 60dB at best. I can probably spit further.
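The arithmetic behind why those cancellation figures are so hard (a back-of-envelope sketch, not from the article): the rejection you achieve is set by how accurately the cancelling copy matches your own signal, and 90dB demands absurd precision.

```python
import numpy as np

def cancellation_db(amplitude_error):
    """Rejection achieved when the cancelling copy's amplitude is off
    by this fraction (phase assumed perfect, which flatters it further)."""
    return -20 * np.log10(amplitude_error)

for err in (1e-2, 1e-3, 1e-4):
    print(f"{err:.0e} amplitude error -> {cancellation_db(err):5.1f} dB")
# 1e-02 amplitude error ->  40.0 dB
# 1e-03 amplitude error ->  60.0 dB
# 1e-04 amplitude error ->  80.0 dB
```

So the 60dB-at-best figure above corresponds to matching your own signal to about one part in a thousand; 90dB needs roughly one part in thirty thousand, held while people wander past the antenna.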
Well, we pooh-pooh it, but...
I remember a conversation in 1996 or so at Nokia, Camberley with a senior GSM engineer, who firmly believed that CDMA wouldn't ever work, because (amongst other things) the 'near-far' problem, where a 'loud' phone would stifle the cell.
Solved by having power-correction sent to the mobile 1,500 times a second.
Ingenuity abounds and astounds!
They should apply techniques like this to speech, then we could have phones with speakers on them (or "speakerphones" as I call them) without getting feedback.
Your analogy is good, if mischievous. To put the numbers in: a GSM phone near its range limit will be putting out +30dBm, or 1 Watt or so, and receiving at -90dBm. This is a whopping 120dB of rejection required just for parity between signal and interferer; you would need more.
In audio terms, this means cancelling a speakerphone signal equivalent to a jumbo taking off, whilst listening to a mouse.
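The link-budget arithmetic from the post above, spelled out (dBm is dB referenced to 1mW, so subtracting two dBm figures gives a ratio in dB):

```python
tx_dbm = 30.0     # GSM handset near its range limit, roughly 1 W
rx_dbm = -90.0    # wanted signal arriving from the far end

rejection_db = tx_dbm - rx_dbm        # dB of rejection just to reach parity
power_ratio = 10 ** (rejection_db / 10)

print(rejection_db)                   # -> 120.0
print(f"{power_ratio:.0e}")           # -> 1e+12, a trillion-to-one power ratio
```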
I think they might well do it - given that they use 2 antennas and some geometry to take out half of it (60dB) leaving the rest manageable with today's RF mixer linearity etc. Normally though, a near doubling of cost, for - at best - double the bandwidth, plus restrictions, does not a business case make.
IIRC, Moon <-> Earth is about -144dB.
But that works...unless you're a conspiratist, natch...
So what you're all saying is.
It's been done before (often).
It can give good results.
But not *quite* as good as its supporters say they are, and as people *need* them to be useful (IIRC 90dB is roughly 2^15, or about 15 bits, of amplitude range) IRL.
Of course it could be that these people know this and have dug a bit further into it to find a new wrinkle.
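Spelling out the dB-to-bits conversion used in the summary above (assuming amplitude/voltage dB, i.e. 20·log10):

```python
import math

db = 90
amplitude_ratio = 10 ** (db / 20)   # ~31600:1 in amplitude
bits = math.log2(amplitude_ratio)   # dynamic range expressed in bits

print(round(bits, 2))               # -> 14.95, i.e. about 15 bits
```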