^ that one. Though given the sets outlined in the article I'm not sure they're actually assessing accuracy; it sounds more like the 99.995% figure is specificity. To get accuracy they would have to factor in how common positives and negatives are.
For the original question (and my earlier reply went astray, so this is the even shorter version): there are different figures of interest. Changing one part of your detection may change some more than others, and overall success rate is still dependent on how frequent positives (in this case, terrorist videos) are relative to negatives (cat videos).
Sensitivity. How frequently the test reports positive given only positives. 0.94something here. You can trivially make it 1.0 by always reporting a positive.
Specificity. How frequently the test reports negative given only negatives. 0.9995 here. Trivially 1.0 by always reporting a negative (and trivially 0.0 by always reporting a positive).
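To make those two definitions concrete, here's a minimal sketch; the confusion-matrix counts are made up to match the thread's figures, not taken from the article:

```python
def sensitivity(tp, fn):
    # true positive rate: real positives the test catches / all real positives
    return tp / (tp + fn)

def specificity(tn, fp):
    # true negative rate: real negatives the test passes / all real negatives
    return tn / (tn + fp)

# hypothetical counts: 100 real positives, 10,000 real negatives
tp, fn = 94, 6
tn, fp = 9995, 5

print(sensitivity(tp, fn))   # 0.94
print(specificity(tn, fp))   # 0.9995

# the degenerate "always report positive" detector: catches every positive
# (sensitivity 1.0) but flags every negative too (specificity 0.0)
print(sensitivity(100, 0), specificity(0, 10000))  # 1.0 0.0
```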
Obviously in most cases you can trade one off against the other; receiver operating characteristic (ROC) curves are usually used to illustrate that. Flip a coin every time and both your sensitivity and specificity are 0.5 (as is accuracy, in that special case). Any random choice that doesn't look at the input sits on the diagonal of the ROC plot.
Accuracy is a composite, and a more complex number. As Mycho said, Accuracy = (True Positive + True Negative) / (Positive + Negative). Positive plus negative is just all your samples, but the true positive count depends on how many positives there are and on sensitivity, and the true negative count depends on how many negatives there are and on specificity, so your accuracy figure depends on the ratio of real positives to negatives in the input. If the vast majority are negative, sensitivity has very little effect on accuracy; if most are positive, specificity has very little effect.
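Spelling that out: accuracy is just sensitivity and specificity weighted by how common each class is. A quick sketch, reusing the thread's 0.94 / 0.9995 figures with made-up prevalences:

```python
def accuracy(sens, spec, prevalence):
    # (TP + TN) / (P + N) rewritten as a prevalence-weighted average:
    # sens * P/(P+N) + spec * N/(P+N)
    return sens * prevalence + spec * (1 - prevalence)

# nearly everything is a cat video: accuracy is dominated by specificity
print(accuracy(0.94, 0.9995, 0.0001))  # ~0.9995, sensitivity barely matters

# mostly terrorist videos (implausible, but shows the flip):
# accuracy is dominated by sensitivity instead
print(accuracy(0.94, 0.9995, 0.99))    # ~0.94
```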
This tends to come up in medical screening (https://en.wikipedia.org/wiki/Sensitivity_and_specificity). Two numbers not much talked about are positive predictive value and negative predictive value. Both express, if you select samples (videos, people) from the general population (YouTube, a screening population) based on a test, how much the ratio of positives (terror videos, people with a particular disease) changes relative to the population. A rare disease and a test that is not sufficiently specific may still end up selecting a majority of people who do not have that disease, even if sensitivity is incredible.
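That base-rate trap is easy to demonstrate with Bayes' rule; the prevalence and test figures below are invented for illustration:

```python
def ppv(sens, spec, prevalence):
    # positive predictive value: P(actually positive | test says positive)
    true_pos = sens * prevalence            # fraction of population: real and flagged
    false_pos = (1 - spec) * (1 - prevalence)  # fraction: healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# a rare disease (1 in 10,000) with near-perfect sensitivity and
# seemingly excellent specificity still yields a mostly-wrong positive pool:
print(ppv(0.99, 0.9995, 0.0001))  # ~0.165 -- over 80% of positives are false alarms
```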