
How to protect site from downloading?

Started by Fan_Tema, Aug 21, 2022, 04:34 AM


Fan_Tema (Topic starter)

What measures can be taken to prevent websites built with plain HTML from being downloaded? Tools such as HTTrack Website Copier can easily mirror all of the content, and that is something I would like to avoid.

One possible solution could be to have the entire main-page code encrypted by experienced cryptographers. Various scripts have already been tried without success, so perhaps standard encryption techniques could provide a more secure option.


Seattle

It is impossible to prevent a website from being copied 100%, but access can be limited to a specific group of users.

To reduce bot traffic, you can check the Referer and User-Agent headers so that only legitimate visitors reach the website. Requests with suspicious headers can also be redirected to another page.
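
For example, on a plain Node.js server such a check could look roughly like the sketch below. This is only an illustration: the blocked user-agent patterns and the redirect target are placeholders, not a recommended list.

// Minimal sketch: filter requests by User-Agent before serving the page (Node.js, no dependencies).
const http = require('http');

// Example patterns only; real mirroring tools can fake any User-Agent they like.
const BLOCKED_AGENTS = /httrack|wget|curl|python-requests/i;

http.createServer(function (req, res) {
  var userAgent = req.headers['user-agent'] || '';

  // Requests with no User-Agent, or one matching a known mirroring tool, get redirected.
  if (!userAgent || BLOCKED_AGENTS.test(userAgent)) {
    res.writeHead(302, { Location: '/blocked.html' });
    res.end();
    return;
  }

  // The Referer header (req.headers['referer']) can be checked the same way,
  // for example to redirect deep links that do not come from your own pages.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Regular page content</h1>');
}).listen(8080);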

However, these methods are not foolproof, so other options like Cloudflare or hCaptcha can be considered. While this may inconvenience some users, it is effective at reducing bot traffic.

Lastly, text copy protection can be used to deter casual copying of content from HTML and Bootstrap sites. Adding <body oncopy="return false;"> to the page disables copying through the mouse or keyboard shortcuts. This may discourage some visitors from copying content and lead them to look for the information elsewhere.
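
A minimal sketch of that tip (the onselectstart and oncontextmenu handlers are optional extras beyond the original suggestion, and none of this stops someone from viewing the page source):

<!-- Disable copy, text selection and the right-click menu for the whole page. -->
<body oncopy="return false;" onselectstart="return false;" oncontextmenu="return false;">
  <p>Page content that is harder to copy with Ctrl+C or the context menu.</p>
</body>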

yoginetindia

People who copy content can be a real pain, especially when they republish it without permission. I've had the same problem and found hundreds of copies of my articles online. It's frustrating to see how easily content can be lifted straight from the HTML code.

While I have tried the code suggested above to block copying and text highlighting, the page source can still be opened by pressing Ctrl + U.

To make that harder, I did some research and found a script that blocks text selection and copying. It goes between the <head> tags, or it can be moved to a separate JS file and referenced by its path on the hosting.

While this script will not stop professional copycats who use software to mirror entire websites, it can at least make things more difficult for beginners who are just starting out.

<script>
// Blocks text selection, Ctrl+A, dragging, and the context menu for the whole document.
function preventSelection(element) {
  var selectionBlocked = false;

  // Attach an event handler, with a fallback for legacy IE (attachEvent).
  function addHandler(target, event, handler) {
    if (target.addEventListener)
      target.addEventListener(event, handler, false);
    else if (target.attachEvent)
      target.attachEvent('on' + event, handler);
  }

  // Clear any text selection that already exists.
  function removeSelection() {
    if (window.getSelection)
      window.getSelection().removeAllRanges();
    else if (document.selection && document.selection.clear)
      document.selection.clear();
  }

  // Block Ctrl+A ("select all"), except inside form fields.
  function killCtrlA(event) {
    event = event || window.event;
    var sender = event.target || event.srcElement;
    if (sender.tagName.match(/INPUT|TEXTAREA/i))
      return;
    var key = event.keyCode || event.which;
    if (event.ctrlKey && key === 'A'.charCodeAt(0)) {
      removeSelection();
      if (event.preventDefault)
        event.preventDefault();
      else
        event.returnValue = false;
    }
  }

  // While the mouse button is held down outside form fields, keep clearing the selection.
  addHandler(element, 'mousedown', function (event) {
    event = event || window.event;
    var sender = event.target || event.srcElement;
    selectionBlocked = !sender.tagName.match(/INPUT|TEXTAREA/i);
  });
  addHandler(element, 'mousemove', function () {
    if (selectionBlocked)
      removeSelection();
  });
  addHandler(element, 'mouseup', function () {
    if (selectionBlocked)
      removeSelection();
    selectionBlocked = false;
  });

  addHandler(element, 'keydown', killCtrlA);
  addHandler(element, 'keyup', killCtrlA);
}

preventSelection(document);

// Also cancel dragging, the selectstart event, and the right-click context menu.
function blockEvent() {
  return false;
}
document.ondragstart = blockEvent;
document.onselectstart = blockEvent;
document.oncontextmenu = blockEvent;
</script>


Harry_99

Hi,
HTML was created as a completely open standard, so it was never meant to be hidden from strangers who like to reach for Ctrl + C. Information security experts I know told me over tea that viewing HTML source code is not a crime.
You can execute the code on the server and send only the result to the browser. Sometimes this is useful.
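
As a rough illustration of that idea, here is a minimal sketch assuming a Node.js server; the data and markup are placeholders. The page is assembled on the server, so the browser only ever receives the finished HTML, never the server-side code:

// Minimal sketch: build the page on the server and send only the result to the browser.
const http = require('http');

function renderPage() {
  // In a real site this data would come from a database or a template engine.
  const items = ['First article', 'Second article'];
  const list = items.map(function (title) { return '<li>' + title + '</li>'; }).join('');
  return '<html><body><h1>Articles</h1><ul>' + list + '</ul></body></html>';
}

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(renderPage()); // only the rendered output reaches the visitor
}).listen(8080);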

brandsmith

Encrypting the entire main page code may provide some level of protection against casual attempts to download the content, but it would not be foolproof. Once the encrypted code reaches the client's browser, it would need to be decrypted in order for the website to function, making it susceptible to reverse engineering.

Instead, if you want to make a website harder to mirror, a more effective approach would be to use server-side rendering or dynamic web technologies. These methods generate pages on the server and send only the rendered content to the client's browser. The server-side templates and logic are never exposed; a copier can still save the rendered pages, but reproducing the complete, working site becomes much harder.

Another approach is to implement authentication and access control mechanisms. By requiring users to log in and verifying their credentials, you can restrict access to the website's content and prevent unauthorized downloading.
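
Even a simple HTTP authentication gate keeps mirroring tools without credentials out. A minimal sketch, assuming a Node.js server; the username and password here are placeholders:

// Minimal sketch: require HTTP Basic authentication before serving any page (Node.js).
const http = require('http');

const EXPECTED = 'Basic ' + Buffer.from('editor:secret').toString('base64'); // placeholder credentials

http.createServer(function (req, res) {
  if (req.headers.authorization !== EXPECTED) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="Members only"' });
    res.end('Authentication required');
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Protected content</h1>');
}).listen(8080);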

In addition, you can apply various security measures such as rate limiting, IP blocking, and bot detection to prevent automated tools like HTTrack Website Copier from accessing your website. Consider using Content Delivery Networks (CDNs) that offer additional security features to protect against scraping and unauthorized downloads.
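
Rate limiting can start as simply as counting requests per IP address on the server. The sketch below assumes Node.js and uses arbitrary example values; a real deployment would more likely rely on a CDN, reverse proxy, or dedicated middleware:

// Minimal sketch: naive in-memory rate limiting per IP address (Node.js).
const http = require('http');

const WINDOW_MS = 60 * 1000; // 1-minute window (example value)
const MAX_REQUESTS = 60;     // allowed requests per IP per window (example value)
const hits = new Map();      // ip -> { count, windowStart }

http.createServer(function (req, res) {
  const ip = req.socket.remoteAddress;
  const now = Date.now();
  const entry = hits.get(ip) || { count: 0, windowStart: now };

  // Reset the counter once the window has passed.
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(ip, entry);

  if (entry.count > MAX_REQUESTS) {
    res.writeHead(429, { 'Retry-After': '60' });
    res.end('Too many requests');
    return;
  }

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Page content</h1>');
}).listen(8080);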

1. Implement obfuscation techniques: Obfuscation involves intentionally making the code harder to understand or reverse engineer. You can use tools or libraries that obfuscate your HTML, JavaScript, and CSS code to make it more difficult for automated tools to extract the content.

2. Use anti-scraping services: There are specialized services that help protect websites from scraping and downloading attempts. These services employ various techniques like CAPTCHAs, IP blocking, and behavior analysis to detect and block scraping activities.

3. Employ Content Security Policies (CSP): CSP is a security feature that allows website owners to define where resources (e.g., scripts, stylesheets) may be loaded from and who may embed the pages. CSP on its own will not stop a downloader, but combined with hotlink protection it makes it harder for a rehosted copy to keep pulling assets from your server (a minimal header sketch follows this list).

4. Leverage server-side technologies: Instead of relying solely on client-side HTML code, you can utilize server-side frameworks and technologies such as Node.js, Django, or Ruby on Rails. These frameworks generate dynamic web pages, making it more challenging for tools to capture the entire website's content.

5. Monitor and block suspicious activities: Regularly monitor your website's traffic and server logs for unusual patterns or excessive requests. Implement automated mechanisms to detect and block IP addresses or user agents engaging in suspicious activity.
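
For item 3, sending the policy is just an extra response header. A minimal sketch assuming a Node.js server; the policy string is only an example and needs tailoring to the resources your site actually uses:

// Minimal sketch: serve pages with a Content-Security-Policy header (Node.js).
const http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // Example policy: only allow resources from this origin and forbid framing by other sites.
    'Content-Security-Policy': "default-src 'self'; frame-ancestors 'none'"
  });
  res.end('<h1>Page content</h1>');
}).listen(8080);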

Remember, while these measures can provide an additional layer of protection, determined individuals with enough technical expertise can still find ways to access and download your website's content. Thus, it's essential to assess the level of sensitivity of the information on your website and balance security measures with usability requirements.

