Hacker News | kr7's comments

You do realize that is exactly why alt-righters believe there is bias against white people?

"We need to protect minorities and women from the white male oppressors" is basically the slogan of the identity politics wing of the left.


> "We need to protect minorities and women from the white male oppressors" is basically the slogan of the identity politics wing of the left.

No, that's something white nationalists say to each other; I don't see it elsewhere (in any significant quantity). There is overwhelming evidence of discrimination against minorities and against women, and since these people are largely excluded from power by that same discrimination, they have little ability to protect themselves. If we believe in liberty, opportunity, meritocracy, and basic morality for everyone, then we should all do something about it.

For the most part, only white nationalists say white-skinned people are discriminated against (with any significant scale or impact). There is no evidence of it that I've seen, and a look around SV and the IT industry, corporate offices, the governments and legislatures of every Western nation, college campuses, and any other locale of wealth, power and privilege shows how absurd the idea is. But the argument isn't really about discrimination; it's propaganda aimed at creating hatred, at getting people to see everyone as friend or enemy based on skin color (which is perhaps why white nationalists imagine some sort of bizarre us-vs.-them team competition), and at providing a (transparent) excuse for racial hatred. Few people are fooled.


You say "we need to protect minorities and women from the white male oppressors" is not an accurate representation of their viewpoint.

Then you proceed to say that minorities and women are discriminated against, and we need to do something about that. Who is doing this discrimination, then? You point out that white-skinned people tend to hold power in (traditionally white) Western nations. I would say that my description is accurate.



In section 3.1 they say it is a performance issue.


Then people will just make novelty websites to point out that 1/7 + 2/7 ≠ 3/7.
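For binary floating point the canonical novelty-site example is 0.1 + 0.2; a quick sketch in Python (whose floats are IEEE 754 doubles):

```python
# None of 0.1, 0.2 or 0.3 is exactly representable in binary,
# and here the rounding errors don't cancel:
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False
```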


This is solved by the Scheme numerical tower, which prefers exact representations (including exact rationals) unless an inexact representation is explicitly requested or forced by an operation that doesn't support exact results.


The problem is remembering to only input rationals :) So instead of doing (iota 100 0.1 0.1), you do (iota 100 (/ 1 10) (/ 1 10)). That does not work in CHICKEN for some reason.

I don't know if precomputation is ever guaranteed for those things, but otherwise it would be neat to be able to input rationals directly into the source.

Edit: So, this is Scheme standard discovery week: apparently inputting 1/10 works just fine. I can't believe I missed this.
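For comparison, the same exact-rational behaviour can be sketched outside Scheme, e.g. with Python's fractions module (an analogue of the tower's exact rationals, not the Scheme API itself):

```python
from fractions import Fraction

# 100 exact steps of 1/10 land exactly on 10...
step = Fraction(1, 10)
print(sum(step for _ in range(100)))          # 10
# ...while the float version accumulates rounding error:
print(sum(0.1 for _ in range(100)) == 10.0)   # False
```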


But in decimal floating point this is also solved: 1/7 + 2/7 = 3/7

Please research decimal floating point first.


That particular example happens to work, but 1/7 + 1/7 != 2/7.

DecFP is not magic. You still need to know that you're dealing with limited precision numbers under the hood.
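Both cases are easy to reproduce with Python's decimal module (IEEE-style decimal floating point, 28 significant digits by default):

```python
from decimal import Decimal

d1, d2, d3, d7 = Decimal(1), Decimal(2), Decimal(3), Decimal(7)

# 1/7 rounds up in its last digit, so adding it to itself overshoots 2/7:
print(d1 / d7 + d1 / d7 == d2 / d7)   # False
# ...whereas for 1/7 + 2/7 the roundings happen to cancel:
print(d1 / d7 + d2 / d7 == d3 / d7)   # True
```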


You are right, that example doesn't work. I thought the rounding and normalization described in the standard might fix all these cases by itself, but I was wrong.

But at least all problems that could happen on a financial application are solved with decimal floating points (where you will only want to use rationals in finite decimal form like 0.01)

And even your example can be made working pretty easily:

  #include <decimal/decimal>
  #include <iostream>

  int main(int /*argc*/, char **/*argv*/)
  {
    using namespace std;

    using namespace decimal;
    decimal128 d1 = 1;
    decimal128 d2 = 2;
    decimal128 d7 = 7;
    uint64_t conversionFactor = 10000000000000000000ull;
    cout << decimal128_to_long_long(d1/d7 * conversionFactor) << endl;
    cout << decimal128_to_long_long(d2/d7 * conversionFactor) << endl;
    cout << (decimal128_to_long_long((d1/d7+d1/d7) * conversionFactor) ==
      decimal128_to_long_long((d2/d7) * conversionFactor) ? "yes" : "no") << endl;
  }

  1428571428571428571
  2857142857142857142
  yes

That was done using the decimal types shipped with GCC. And if you look, e.g., into the Intel DFP library readme, you'll see lots of functions that will allow you to do the comparison you wanted to do: https://software.intel.com/sites/default/files/article/14463...


> But at least all problems that could happen on a financial application are solved with decimal floating points (where you will only want to use rationals in finite decimal form like 0.01)

You still get cancellation if the magnitudes differ by large enough an amount. This is a problem inherent to floating point arithmetic, using a decimal format instead of binary does not save you from that.
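A sketch of that absorption effect with Python's decimal module (default 28-digit precision; the figures are made up for illustration):

```python
from decimal import Decimal

balance = Decimal("1e30")   # needs more digits than the 28-digit precision
cent = Decimal("0.01")

# The exact sum would need 33 significant digits, so it is rounded
# straight back to 1e30 and the cent is lost without a trace:
print(balance + cent == balance)       # True
print(balance + cent - balance == 0)   # True
```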


16 Psyche is far too big to move anywhere.

16 Psyche's mass is 2.27·10^19 kg and its orbital speed is 17.34 km/s.

Earth's orbital speed is 29.78 km/s.

So a perfect orbital transfer (not even possible) to get 16 Psyche into Earth's orbit would take an impulse of at least (29780.0 m/s - 17340.0 m/s) · 2.27·10^19 kg = 2.82388·10^23 kg·m/s.

A Saturn V rocket produces 3.51·10^7 N of thrust.

So it would take one Saturn V rocket 8.0452·10^15 seconds to move 16 Psyche, or 2.5511·10^8 years.

If we had 1 million Saturn V rockets, it would take just over 255 years.
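Redoing that arithmetic (same figures as quoted above):

```python
mass = 2.27e19             # kg, 16 Psyche
dv = 29780.0 - 17340.0     # m/s, difference of the two orbital speeds
impulse = mass * dv        # kg*m/s, ~2.82e23
thrust = 3.51e7            # N, one Saturn V

seconds = impulse / thrust               # ~8.05e15 s
years = seconds / (365.0 * 24 * 3600)    # ~2.55e8 years for one rocket
print(impulse, seconds, years)
```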


The title is somewhat misleading. The suburbs are not dying; in fact they are growing [1][2][3][4] and this article makes no claim to the contrary.

The article is saying that suburbs are becoming more like urban areas.

I like the author's evidence that housing prices are falling:

> In that same city in 2012, a typical McMansion would be valued at $477,000, about 274% more than the area's other homes. Today, a McMansion would be valued at $611,000, or 190% above the rest of the market.

Up 28% in price - must be dying!

[1] http://time.com/107808/census-suburbs-grow-city-growth-slows...

[2] http://www.citylab.com/housing/2016/03/2015-us-population-wi...

[3] http://www.businessinsider.com/americans-moving-to-suburbs-r...

[4] https://www.forbes.com/sites/joelkotkin/2013/09/26/americas-...


The point of that comparison is separating the value of the lot from the value of the house sitting on that lot. The property value isn't going down, but the value of the home is dropping considerably.

The misleading thing may be that this is just standard depreciation, although I wouldn't be surprised if houses in the 3000-5000 sq. ft. bracket were depreciating faster than smaller ones. (It seems likely the numbers they stated were simply the clearest way they could find to say that these houses are depreciating at an unusually fast rate.)


I'm not sure I follow that logic. Lot value is difficult to assess, and assessments are frequently wrong.

The cost to tear down a McMansion that hasn't fully depreciated is very high, making the lot's value questionable. Additionally, since tract housing is often nearly identical, if your neighbors' homes are going to shit and depreciating, that will impact your lot value too.


I think by "dying" they mean "hip yuppie white Millennials don't live there [yet]." Nobody else counts.


The entire series has such hyperbolic language. I know that's BI's thing, but to treat the closing of shopping malls as a cultural apocalypse is surreal.


That struck me too. Pretty amusing. I think there is definitely a trend against the misshapen McMansion, though. I attribute it to times when home building was booming and demand for homes of that size was outstripping supply. The result was that people were willing to accept, or just didn't even notice, that the architecture was a little "off". So when space > aesthetics in a buyer's mind, "good enough" was, well... good enough.


Ok, we changed the title to a subheading from the article.


The title is not misleading: The suburbs as we know them are dying.

I think you are misreading it to mean something it does not say.


Sure. Not clickbaity at all. Let's make a new article: Computers as we know them are dying.

Edit: You might want to revise your definition of misleading. Misleading doesn't mean something is factually false. It means it's true but presented in a way that leads you to believe something else. So you can feed a narrative without lying.


Computers as we know them are, in fact, dying. This is a genuine pain point for some people, while being an opportunity for others.

Click-baity and misleading are not the same thing. Having a title that gets people to look at all is necessary to get traffic. This is true even on HN, in spite of how much people decry the evils of click-bait titles. It is incredibly hard to title things excellently well, such that it gets traffic but won't get labeled as "evil, nefarious click-bait with some dirty agenda" by the HN crowd.

I do freelance writing and I blog and I submit stuff to HN regularly. My view of this is not rooted in stupidity. It is rooted in knowing that click-bait titles work to get views and can get your article flagged to death even if the article per se is an excellent piece of writing, yet trying too hard to not be click-bait can mean you get very few views and no upvotes.

Titling things well is hard. I see nothing nefarious in how this is titled and I have too much firsthand experience with how incredibly critical HN is. The criticisms here are to the point of being neurotic and cranky. It is not merely a case of placing a high value on excellence.


>Computers as we know them are, in fact, dying.

My point exactly.

I was not criticising click-baiting, or the article. I was simply pointing out that the title was in fact misleading, exaggerating by way of semantic tricks. I used the term click bait because in this particular case the author was not trying to lie, but rather simply to make the title have more impact.


Telling me that I might want to revise my definition of misleading is essentially calling me stupid. You aren't the OP to whom I replied, the mods changed the title already, AND my first comment was down voted into the negatives. So, this looks like a gratuitous personal attack to me. You doubling down looks even more petty.

It is no wonder dang feels HN has a civility problem and hoped he could cure it by doing a political detox week, which did not work because it wasn't the issue.


I'm sorry if you got downvoted because of me. My cynicism is probably the cause.

However, if you think having an argument is calling you stupid and is a personal attack then this is your own problem.

I had upvoted your comments and I liked your reply. It doesn't mean I agree with you, and it doesn't mean I'm here to get you either.

Article titles are and always will be a problem on HN. This won't change; all we can do is point out when there is a problem with them, and then immediately complain when the mods editorialise them a little too much.


Arguing and calling someone stupid are not the same thing. I can make the distinction. Telling me it is my problem is another personal attack. I stand by my statement that your remarks here fail to meet a standard of civility.


Though for the purposes of comparison, -0 and +0 are the same in IEEE 754.

So the expression "x >= 0.0f && x <= 1.0f" is true for x = -0.
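A quick check (Python floats are IEEE 754 doubles, so the semantics match the C expression):

```python
import math

x = -0.0
print(x == 0.0)                 # True: comparisons ignore the sign of zero
print(0.0 <= x <= 1.0)          # True: so -0 passes the range check
print(math.copysign(1.0, x))    # -1.0: but the sign bit is still there
```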


The solution is to compile with SSE2 on x86. (flags: -mfpmath=sse -msse -msse2)

On x86-64, the compiler should default to SSE2.

SSE2 is ~16 years old so compatibility shouldn't be an issue.


Technically, you only actually need the instructions from the original SSE set to do floating point operations. SSE2 adds a bunch of really useful integer instructions.

But the only extra CPUs that gets you are the Pentium III, AMD Athlon XP, and AMD Duron.

SSE2 is supported on every single x86 CPU released after those, such as the Pentium 4, Pentium M, and Athlon 64.

It's a real shame that people are still using CPUs that don't support SSE4, such as the AMD Phenom and Phenom II, otherwise everyone would have moved to requiring SSE4 by now.


SSE1 is single-precision only. SSE2 added double precision.

So the bug will still appear for 'double' using just SSE1.


Some Atoms only support up to SSSE3 too.


It does work, though not as well as using an integer multiply.

The approximate latencies for Skylake are:

    div --> 26 cycles
    cvtsi2sd + mulsd + cvttsd2siq --> 6 + 4 + 6 = 16 cycles
I did a quick (and imperfect) microbenchmark, got these results:

    Real integer division (-Os) --> 1.392s
    FPU Multiply (-Os)          --> 0.243s
    FPU Multiply (-O2)          --> 0.197s
    Integer Multiply (-O2)      --> 0.164s
The code:

    #include <stdio.h>

    int main() {
        volatile unsigned x;
        for (unsigned n = 0; n < 100000000; ++n) {
    #if 1 /* Change to 0 to use FPU. */
            /*
            Compile with -Os to get GCC to emit div instruction.
            -O2 to emit integer multiply.
            Clang emits integer multiply, even with -Os.
            */
            x = n / 19;
    #else
            /* Use the FPU. */
            x = (double)n * (1.0 / 19.0);
    #endif
        }
    }


Is it correct? What is x for n < 19 for example?


It works for 32-bit unsigned integers and double precision floats.

For n < 19, "(double)n * (1.0 / 19.0)" evaluates to a double between 0.0 and 1.0, then it is truncated to 0 when it is implicitly converted to unsigned int.

Since there are only 2^32 values for 32-bit integers, it is possible to test all values in under a minute:

    #include <stdio.h>
    #include <stdint.h>

    int main() {
        uint32_t n = 0;
        do {
            uint32_t a = n / 19;
            uint32_t b = (double)n * (1.0 / 19.0);
            if (a != b) {
                printf("Not equal for n = %u\n", n);
            }
            ++n;
        } while (n != 0);
    }


That page has a warning at the top:

> IMPORTANT: Useful feedback revealed that some of these measures are seriously flawed. A major update is on the way.

Looking over the results, some of the numbers are off.

On Intel CPUs, FP multiplication is faster than integer division. This might not be true on ARM CPUs, which generally have slower FPUs.

On Skylake, for example, 32-bit unsigned integer division has a 26 cycle latency with a throughput of 1 instruction / 6 cycles, while 32/64-bit floating point multiplication has a 4 cycle latency with a throughput of 2 instructions / cycle.

Source: http://agner.org/optimize/instruction_tables.pdf


The crucial point, however, is that while FP multiplication is faster than integer division, converting between a floating point and an integer is very slow: cvtsi2ss and cvtss2si have a latency of 6 cycles each. This adds a latency of 12 cycles for each of these multiplications.

For divisions by a constant value that don't easily decompose into shifts, you can fall back to multiplication by a magic constant which is the integer reciprocal. (This is also something compilers do and is what's being explained in the article.)


Multiplying by the integer reciprocal only works if the dividend is an integer multiple of the divisor.

What's being explained in the article is multiplying by a fraction the value of which is close to the rational reciprocal of the divisor, and where the denominator of the fraction is an integer power of two (so dividing by the denominator can be done with a shift).

The fraction in this case is (2938661835 + 2^32) / 2^37.
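That fraction can be checked directly. A sketch (Python is used for its arbitrary-precision integers; 2938661835 + 2^32 = 7233629131 is ⌈2^37/19⌉, and the fact that this multiplier doesn't fit in 32 bits is exactly why the C-level code needs the extra add-and-shift dance):

```python
# Multiplier is ceil(2**37 / 19); note it is 33 bits wide.
MAGIC = 2938661835 + 2**32   # = 7233629131
SHIFT = 37

def div19(n):
    """Unsigned 32-bit division by 19 via multiply-and-shift."""
    return (n * MAGIC) >> SHIFT

# Spot-check, including the edges of the 32-bit range:
for n in (0, 1, 18, 19, 20, 12345, 2**31 - 1, 2**31, 2**32 - 1):
    assert div19(n) == n // 19
print("ok")
```

Only a spot check is shown here; the claim that it holds for every 32-bit n follows from the usual rounding bound on the multiplier (or from the exhaustive loop given earlier in the thread).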

