Channel: Forensic Focus Forums - Recent Topics

Education and Training: PhD Cybercrime Topics

For my Master's thesis I wrote about the legalities involved in allowing companies to hack back when attacked (I termed it a Cyber Stand Your Ground law). It seems to have gotten some traction as of late...

General Discussion: Memory Forensics (Volatility) - Dst port 445 to public IP

marcusplexus wrote: Any suggestions?

Who do these IP addresses belong to? Perhaps they belong to Microsoft and are part of their usual data collection procedures, called "Telemetry" and "User Experience Improvement". If I were you, I would investigate these addresses first.

Regards, Robin
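For a rough first pass on such destinations, a short standard-library Python sketch like the one below (the address shown is a documentation-range placeholder, not one of the IPs from this thread, and the variable names are my own) can separate private from public ranges and pull back any PTR record before a whois lookup:

Code:

import ipaddress
import socket

suspect_ips = ["203.0.113.10"]   # placeholder; substitute the destinations from the netscan output

for ip_str in suspect_ips:
    ip = ipaddress.ip_address(ip_str)
    if ip.is_private or ip.is_loopback or ip.is_reserved:
        print(f"{ip_str}: internal/reserved - normal territory for SMB (445/tcp)")
        continue
    try:
        ptr = socket.gethostbyaddr(ip_str)[0]   # reverse DNS; only a hint, not proof of ownership
    except socket.herror:
        ptr = "no PTR record"
    print(f"{ip_str}: public address, PTR = {ptr} - confirm the owner via whois/RDAP")

A PTR record is only a hint, so any public hit should still be confirmed against RIPE/ARIN whois.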

General Discussion: Validation and decision making

tootypeg wrote: I would be really interested to have your feedback on this, any evaluation or additions, edits would be very much appreciated.

You might want to follow some flow-charting/process standard (unless you are already following one I haven't seen before). <>-boxes are almost always used for decisions, for example ... (Already covered?)

Add reference numbers to the boxes. It makes it so much easier to talk about 'decision D5' instead of 'the green box to the left of the yellow box near the center'.

It's a bit odd that testing (to unknown standards) trumps peer-reviewed publication.

It's very much waterfall. The area of 'expand knowledge' should probably loop back to 'do we know enough to report?', which probably makes the outcome of 'expand knowledge' some form of publication. (It makes it less practical, but that is probably not relevant here.)

Quote: I know some of it is very much basic but I want to hone it to the point where the process is engaged with to prevent misinterpreted, erroneous content entering DF reports/statements.

That requires a decision box 'Am I competent to make any or all the decisions in this flow chart?' before this sequence is entered.

Not sure if 'evidence type' is a good starting point. Perhaps 'conclusions, inferences and assumptions' would be slightly better, as that makes it clear that it's something to do once a preliminary report is available.

Mobile Phone Forensics: S7 Edge secure startup

HD-Box could brute force that for you.

General Discussion: Memory Forensics (Volatility) - Dst port 445 to public IP

Thanks for your replies:
1. I checked the article from FireEye. The artifacts are not present on the system.
2. The IPs are not related to Telemetry.
3. MDCR: Good observation about MS networking. I don't understand this bit, I'll do some research: "Check repository for any eventconsumers that are not present on other machines."
Still searching...

General Discussion: Validation and decision making

Quote: Generically speaking I can see a "generic" issue with the reliance on either: 1) published (and thus peer-reviewed) material, 2) peer reviewing in general. We all know how (particularly in recent years) there is a fair amount of published material that is (IMHO) very poor. The issue as I see it is that often the author is a poor experimenter and their peers are as poor as they are, or maybe they didn't review the article as they were supposed to. If you prefer, a number of published articles (including those related to CF) are not reproducible. More generally, peer reviewing seems to be at a low, basically because finding such peers is not as easy as it seems. Then, a number of articles are very, very "narrow", so that it is rare that BOTH of the conditions: 1) the peer-reviewed material exists, 2) the peer-reviewed material actually applies to the specific OS, version, etc., are fulfilled. So, once very "basic" knowledge and "unchanged", well-known behaviours are excluded, when it comes to the more "difficult" parts the path through the peer-reviewed material would be largely impracticable, and you are left only with the "experiment yourself" path. This latter is likely not to be doable (because of limited time/resources/etc.), or at least there is a great risk that the experiment won't be "fully" or "fully and properly" executed, and since the results of the experiments (according to the diagram) are not verified by third parties or peer reviewed, they somehow carry with them less relevance/authoritativeness.

Interesting point about peer-reviewed material, and I agree in part. But if we can't rely on material like this, we arguably have nothing. However, to cover for this I did add 'reliably' peer reviewed in the decision boxes, because I see your point. Regarding the latter, in terms of evidence reliability, I know it is burdensome, but the alternative is essentially that people include non-validated content in their reports - leaving us in the position we are in now. Surely this is not good. We essentially need to build up a body of reliable knowledge before this process becomes more efficient.

Quote: My doubt is that if the flowchart is followed to the letter, this would take either endless time or too often produce an "unsafe to report" result. So maybe there is space for "levels of confidence in the report".

Is this not part of the issue? It might end in 'unsafe' because of the lack of validation work the field has done so far - time to start?

Quote: Finally, supposing that the diagram should represent a sort of guideline, I believe it should be added (of course as a mere, non-binding recommendation) that the results of the experiments should be published (so that they can be - formally or informally - reviewed/commented on/etc.) and hopefully become part of the peer-reviewed material.

Great idea!!

Quote: You might want to follow some flow-charting/process standard (unless you are already following one I haven't seen before). <>-boxes are almost always used for decisions, for example ... (Already covered?)

Yes, this is just a draft; I will follow a standard for the final one.

Quote: Add reference numbers to the boxes. Makes it so much easier to talk about 'decision D5', instead of the green box to the left of the yellow box near the center.

Will do.

Quote: It's a bit odd that testing (to unknown standards) trumps peer-reviewed publication.

It doesn't - only when the peer-reviewed material either doesn't exist for the scenario faced by the practitioner or is not reliable.

Quote: That requires a decision box 'Am I competent to make any or all the decisions in this flow chart?' before this sequence is entered. Not sure if 'evidence type' is a good starting point. Perhaps 'conclusions, inferences and assumptions' would be slightly better, as that makes it clear that it's something to do once a preliminary report is available.

Good point, I will add it. Any other stages missing, etc.? I mean, overall, in a perfect world, if followed properly, should this process flow prevent issues?

Mobile Phone Forensics: S7 Edge secure startup

What did you use to brute force it? 7+ digit PINs are rare, since they are hard to type (while driving, for example). Are you sure it is not asking for a password instead of a PIN?

General Discussion: Validation and decision making

The process flow has now been updated; the original post has been edited and it is also posted HERE.

General Discussion: Memory Forensics (Volatility) - Dst port 445 to public IP

Saw the IP address now; here is the whois: https://apps.db.ripe.net/db-web-ui/#/query?searchtext=%2080.106.26.167#resultsSection

And yes, 445 is MS Directory Services. It should never leave the network.

What I meant is that something could have infected your machines that does not show up; one place to hide out of sight of disk/process analysis is WMI event consumers. FireEye has more info: https://www.fireeye.com/blog/threat-research/2016/08/wmi_vs_wmi_monitor.html Normally an APT tactic, but others are quick to adapt TTPs. You'll need this file: C:\Windows\System32\wbem\Repository\OBJECTS.DATA

Apart from that, maybe a rootkit. I can only speculate without sitting there and knowing your environment. For now, just block the egress point in your network so no more connections can be established.
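For what it's worth, here is a minimal sketch (Python, my own variable names) of the string-sweep idea from the FireEye post, run against a copy of that OBJECTS.DATA file. It is not a proper CIM repository parser; it just greps for the usual consumer/binding class names, searching for both ASCII and UTF-16LE encodings:

Code:

import re
from pathlib import Path

# Copy of C:\Windows\System32\wbem\Repository\OBJECTS.DATA taken from the suspect machine
raw = Path("OBJECTS.DATA").read_bytes()

names = ["CommandLineEventConsumer", "ActiveScriptEventConsumer", "__FilterToConsumerBinding"]
# strings may be stored as ASCII or UTF-16LE, so search for both encodings
patterns = [n.encode("ascii") for n in names] + [n.encode("utf-16-le") for n in names]

for pat in patterns:
    for m in re.finditer(re.escape(pat), raw):
        start, end = max(m.start() - 32, 0), min(m.end() + 256, len(raw))
        snippet = raw[start:end].replace(b"\x00", b"").decode("ascii", errors="replace")
        print(f"offset {m.start():#x}: ...{snippet}...")

Hits that have no counterpart on a known-clean machine are the ones worth pulling apart with a dedicated WMI repository parser.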

Classifieds: WTB: Cellebrite dongle or services

I have three Cellebrite UFED Touch units available with all accessories, 2,500-6,500 USD. Message me for more info.

General Discussion: Validation and decision making

Some additions with warnings for confidence measures and catches for testing/validation and competence - HERE. It's just a draft; I can see typos in it.

General Discussion: Mobile Forensics Discord Server

I wanted to provide a link to an active and growing Discord server with 315+ investigators and forensic vendors, and lots of channels and resources to help with your investigations. When you arrive in the server, please include a little snippet about yourself. We assign all members a role such as Private Sector, DFIR Student, or a role specific to your LE agency. This information can be provided in the #introductions channel when you arrive. Hope to see some of you there and happy forensicating! https://discord.gg/kr7AFjf

Classifieds: T35689iu Forensic Bridge Write Blocker Tableau

I have 2x hardware write blockers (Tableau model T35689iu) with cables. Working perfectly; I just no longer have use for them. I was using a specific USB 3.0 internal card with these for hookup (not included); I can look up the model number if needed. Looking for $400 each, O.B.O. PM me if interested.

Forensic Software: XtremeForensics (ISeek/ILook) - Your Opinion?

Thanks for the info. I will check out the other thread.

Classifieds: WTB: Cellebrite dongle or services

This is from over a month ago and someone bumped it to try and sell their machines.

Education and Training: Would appreciate feedback on dissertation methodology

Thanks for your replies; I've considered some of the points made.

In saying I would like to create a real-world scenario, this refers to the data sets that will be created and the deletion of some of this data using the Recycle Bin. I'm aware some users may use advanced wiping software to hide any incriminating data, but I feel the circumstance of simultaneously wiping all data has already been thoroughly covered by previous research. Synthetic data has also been widely used in past studies.

What happens to the other half of the data, and how do these scenarios relate to the real world? When only half the user data is deleted, the other half will remain on the drive. Should data need to be moved around by the SSD controller so whole blocks are available for erasing, leaving data on the drive will make this more difficult than if all files were marked as not in use. I would like to see if there is a notable difference in the time it takes for data to be wiped between these two circumstances. My rationale is that if someone has data to hide, they may only attempt to remove the incriminating files from the drive. In leaving behind non-suspicious data, the drive still appears to have been used and appears unsuspicious. Previous research used multiple copies of the same few files to fill a drive. By using different files of different sizes, I think this may decrease the likelihood of all data being in whole blocks that can be wiped immediately by GC, meaning that it will be necessary for some data to be moved around. The requirement of moving data around to create whole blocks for deletion may be affected by how full the drive is, meaning some data may not be wiped in the time allowed.

Regarding how data deletion is performed by the Recycle Bin, and whether there is any relationship between file types/sizes and how they are selected for GC: I will be recording these details for all files used in the tests and will look to see if anything can be determined from my results.

How were the delay intervals chosen? Based on previous research, the process tends to begin immediately and acts fast. I'm not decided on whether I should begin the interval following the start or the completion of the delete. My initial thinking was that should someone arrive at a scene and find files in the middle of deletion, they would want to stop this.

What is the hypothesis or research question? The purpose of the research is to determine if GC acts as aggressively in certain scenarios as it does when a full drive is marked as ready for wiping. The majority of previous research I have seen fills a drive with data and formats it, making the process easy for GC. Should the drive be almost full and the majority of data be marked for deletion, there should be no issue finding whole blocks for wiping. If only part of the data is marked for deletion, there may not be enough whole blocks available for deletion, or empty blocks available to create whole blocks for deletion. Does this noticeably increase the time taken for all files marked for deletion to become unrecoverable, versus when the drive is only partially filled?

I will look into a software write blocker instead of using a hardware blocker. The article covering TRIM exceptions was very useful, and I will consider whether any part of my process will be affected by any of the circumstances highlighted in it and the follow-up article.
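For what it's worth, a minimal sketch of the dataset-generation step described above could look like this (Python 3.9+; the mount point, file count and size range are placeholders, and real document/image samples could be substituted for the random filler content):

Code:

import csv
import hashlib
import random
from pathlib import Path

target = Path("E:/testdata")   # mount point of the test SSD (placeholder)
target.mkdir(parents=True, exist_ok=True)

random.seed(42)                # reproducible sizes/content across test runs
rows = []
for i in range(500):           # adjust the count until the desired fill level is reached
    size = random.randint(4 * 1024, 8 * 1024 * 1024)   # 4 KiB - 8 MiB, varied sizes
    data = random.randbytes(size)                      # incompressible filler content
    path = target / f"file_{i:04d}.bin"
    path.write_bytes(data)
    rows.append({"name": path.name, "size": size,
                 "sha256": hashlib.sha256(data).hexdigest()})

with open("manifest.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["name", "size", "sha256"])
    writer.writeheader()
    writer.writerows(rows)

Re-hashing whatever can still be carved after each delay interval against manifest.csv then gives a per-file recoverable/unrecoverable verdict.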

General Discussion: Good discussion re disclosure of digital evidence in the UK

In addition to the oral evidence to the UK House of Commons Justice Select Committee already referred to, here is my earlier written evidence: https://goo.gl/YUXQfm

If, rather than watching the TV version of the oral evidence, you'd like a transcript, here it is: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/justice-committee/disclosure-of-evidence-in-criminal-cases/oral/83096.pdf

I have also written a blog about the problems of "too much disclosure" in relation to rape trials: http://pmsommer.blogspot.com/2018/05/

Some people have suggested that "AI" might provide solutions to disclosure/discovery; here are my comments: http://pmsommer.blogspot.com/2018/06/can-artificial-intelligence-solve.html

Finally, the perennial issue of accreditation. I have written about this before, but here is a pre-print of what is appearing in the academic journal Digital Investigation: https://goo.gl/ynFUDd

General Discussion: Good discussion re disclosure of digital evidence in the UK

Peter_Sommer wrote: If rather than watching the tv version of the oral evidence you'd like a transcript, here it is: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/justice-committee/disclosure-of-evidence-in-criminal-cases/oral/83096.pdf

Thank you for the transcript, very interesting.

Peter_Sommer wrote: Some people have suggested that "AI" might provide solutions to disclosure/discovery: here are my comments: http://pmsommer.blogspot.com/2018/06/can-artificial-intelligence-solve.html

Quote: Can artificial intelligence solve the criminal disclosure problem?

No. https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines

jaclaz

General Discussion: Validation and decision making

Taking into account the feedback, which I have gladly received, this is the completed draft - HERE. I'm hoping this captures everything involved in generic decision making when deciding whether to report something. There are also markers for measures of confidence and competence at key points.

Mobile Phone Forensics: S7 Edge secure startup

pcook8198 wrote: I totally agree, 7+ digits seems a little too long as studies show 11 digits is roughly the max the human mind is capable of.

I'm not sure what studies you're referring to, but I would expect them to say '11 *random* digits' as well as specify clearly what sample population the observations are valid for. Most are valid only for students at a particular university...

In a file of cracked passwords that I have collected (thus very probably PINs that someone has remembered), I find the majority of PIN entries (i.e. digits only) to be 11 digits or fewer, as you state, but I have more than 6000 16-digit PINs, and around 100 24-digit PINs. The longest are 255-digit PINs, but as some are all the same digit ('00000...', '1111...' and '5555...') I suspect an effect of a maximum PIN length of 255 characters together with an auto-repeating keyboard: press the key until it beeps (or for x seconds, leading to string truncation), or something like that, but no exceptional memory. Very many long PINs have an initial sequence of '0000...', followed by a 7-digit (or longer) more random sequence. ('1111...' prefixes are also present, but less common.)

So throwing in all remaining long PINs found in any of the 'standard' password leaks (such as the rockyou leak files, for example) might be an idea. Or ... start with 'numbers' from a personal relation: social security numbers, say, or phone numbers or dates ... or just possibly credit card numbers. (I would do all 8-digit dates before I did any more random 8-digit sequences, for example, and I might start by looking at 'nearby' years first.) And possibly extend with '0000...'.
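A quick sketch of the 'all 8-digit dates before any random 8-digit sequences' idea as a wordlist generator (Python; the year range, date orderings and output file name are arbitrary choices):

Code:

from datetime import date, timedelta

start, end = date(1940, 1, 1), date(2018, 12, 31)   # arbitrary year range

with open("date_pins.txt", "w") as fh:
    d = start
    while d <= end:
        fh.write(d.strftime("%d%m%Y") + "\n")   # DDMMYYYY
        fh.write(d.strftime("%m%d%Y") + "\n")   # MMDDYYYY
        fh.write(d.strftime("%Y%m%d") + "\n")   # YYYYMMDD
        d += timedelta(days=1)

The occasional duplicate (day equal to month) is harmless to most crackers, and the same list can be extended with a leading run of '0000...' to cover the long-PIN pattern noted above.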