I was taught that the internet was originally built (by the US government) for mass redundancy - so communication could survive even if whole areas were cut off, like in a nuclear war.
That's why TCP confirms delivery with acknowledgments and retransmits what gets lost, while UDP just sends the data and doesn't care whether it arrived. The redundancy also extended to the root servers. There isn't just one source, there are multiples (13 root server identities, and these days each one is mirrored in many places).
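If you want to see that difference in a few lines, here's a quick Python sketch (the host and port are made up for illustration - nothing needs to actually be listening for the UDP half to "succeed", which is exactly the point):

```python
import socket

HOST, PORT = "127.0.0.1", 9000   # hypothetical endpoint, purely illustrative

# TCP: connection-oriented. The stack handshakes, acknowledges, and
# retransmits; connect() fails loudly if nobody is listening.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect((HOST, PORT))    # three-way handshake happens here
    tcp.sendall(b"hello")        # delivery is acknowledged or retried
    print("TCP: delivered - the other side really is there")
except ConnectionRefusedError:
    print("TCP: refused - it won't pretend the send worked")
finally:
    tcp.close()

# UDP: fire-and-forget. sendto() "succeeds" even if nothing is listening,
# because there is no handshake and no acknowledgment.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", (HOST, PORT))
print("UDP: sent, and we will never know if it arrived")
udp.close()
```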
So the route to and from a resource has alternate paths it can take, because all these computers are online and routing; the packets sent can be double-checked; and the translation from a domain (ex. thedonald.win) to an IP (ex. 111.111.111.222) also exists in multiple places. You can set it in the hosts file on a single computer, at a DNS server for a whole network, and then all the way back up at the root servers.
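Here's a hedged sketch of that lookup chain - gethostbyname just asks the OS resolver, which checks the local hosts file before ever asking a DNS server (I'm using example.com as a neutral stand-in):

```python
import socket

# The OS resolver checks the hosts file first (/etc/hosts on Unix,
# C:\Windows\System32\drivers\etc\hosts on Windows); only on a miss does it
# ask the configured DNS server, which can chase an unknown name up
# through the TLD servers and ultimately the root servers.
try:
    ip = socket.gethostbyname("example.com")
    print(f"example.com -> {ip}")
except socket.gaierror as err:
    print(f"resolution failed: {err}")

# A hosts-file line that would short-circuit DNS for this name entirely:
#   111.111.111.222   example.com
```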
These names were involved with the original network: ARPANET (run under DARPA, hence the "Darpanet" nickname) and later NSFNET under the National Science Foundation. The NSF was the last steward once the government was done with it, and as far as I know they're the ones who decided it could be opened up for public use.
I looked at the new project, and it looks like his concept is to let you grant sites access to your data on your terms. But problems keep coming up: how do you control scraping? How do you ensure someone doesn't keep residual data or resell it? So far it looks to me like he's expecting honesty from all parties, and that's a hard standard to meet.
Google dropped its "Don't be evil" slogan long ago, and Facebook openly yoinks privacy. Any image search is already "content" made by others; same with all the social media sites and forums like this one. All that content doesn't come from the people running the platform - it usually comes from people giving it away. The company merely facilitates displaying it and keeping it sorted. None of them "create" content, so if you withhold all the content from them, how will they survive when they can't act as the packager?
I thought about learning his APIs, but then I saw they were using React, and that disheartened me, because React was made by Facebook and I avoid it. How could the guy who created HTML and the web that gave us the DOM abandon them for a Facebook product??? That one had me scratching my head. Giving up the DOM means giving up all the semantics already in place for SEO. That's dopey, and it would mean you'd have to invent new signalling for search engines to scrape and sort content. It reminds me of people who want to use Canvas instead of HTML (because Flash is dead). There is so much you give up, when machines have had HTML to sort with since the early '90s.
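To make that concrete, here's a small standard-library sketch - the two markup strings are invented examples, but they show what a scraper gets from semantic tags versus anonymous boxes (and text painted onto a Canvas gives it even less):

```python
from html.parser import HTMLParser

SEMANTIC = "<article><h1>Root servers</h1><p>There are 13.</p></article>"
DIV_SOUP = "<div><div>Root servers</div><div>There are 13.</div></div>"

class OutlineParser(HTMLParser):
    """Collects heading text - the kind of signal a search engine keys on."""
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headings = []
    def handle_starttag(self, tag, attrs):
        self.in_heading = tag in ("h1", "h2", "h3")
    def handle_endtag(self, tag):
        self.in_heading = False
    def handle_data(self, data):
        if self.in_heading:
            self.headings.append(data)

for name, doc in [("semantic", SEMANTIC), ("div soup", DIV_SOUP)]:
    parser = OutlineParser()
    parser.feed(doc)
    print(name, "->", parser.headings)  # semantic: ['Root servers']; div soup: []
```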
Once I saw React I jumped back, so I did not investigate their server setup. If they aren't using similar redundancy, then they lose the best part of the TCP/IP structure. And if they are just layered on top of the existing TCP/IP network, then their data will get sucked into storage just like all the normal internet data, so it isn't protected any deeper. Just make sure it comes with encryption. That's probably the most important feature these days: some level of obfuscation that requires time to re-open, versus plain text that can be sorted the moment it's collected.
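For that last point, a minimal sketch using the third-party cryptography package (my assumption for illustration - I have no idea what the project actually ships):

```python
# pip install cryptography   - assumed third-party dependency
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret stays with the data's owner
box = Fernet(key)

token = box.encrypt(b"my post, my data")
print(token)                  # opaque bytes - an interceptor can't sort this

# Only the key holder can reopen it; anyone else has to spend time breaking it.
print(box.decrypt(token))     # b'my post, my data'
```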