Dataset columns:
url: string (length 13 to 4.35k)
tag: string (1 class)
text: string (length 109 to 628k)
file_path: string (length 109 to 155)
dump: string (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
http://search.cpan.org/~nuffin/KiokuDB-Backend-CouchDB-0.04/lib/KiokuDB/Backend/CouchDB.pm
code
KiokuDB::Backend::CouchDB - CouchDB backend for KiokuDB KiokuDB->connect( "couchdb:uri=http://127.0.0.1:5984/database" ); Note that this is the slowest backend of all for reading data, due to the latency in communicating with CouchDB over HTTP. Since CouchDB supports atomicity by using optimistic concurrency locking, transactions are implemented by deferring all operations until the final commit. This means transactions are memory bound, so if you are inserting or modifying lots of data it might be wise to break it down into smaller transactions. An AnyEvent::CouchDB::Database instance. Whether or not to try and create the database on instantiation. Defaults to false. Yuval Kogman <[email protected]> Copyright (c) 2008, 2009 Yuval Kogman, Infinity Interactive. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
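The deferred-commit strategy described above can be sketched independently of Perl. The following is a minimal Python illustration (not KiokuDB's actual implementation; the class and method names are invented for the sketch) of why such transactions are memory bound: every operation is buffered until commit, so memory grows with the number of pending operations.

```python
class DeferredTransaction:
    """Buffers all writes in memory and applies them only at commit time.

    Mirrors the deferred-operation strategy described above: nothing is
    sent to the backing store until commit(), so memory usage grows with
    the number of pending operations in the transaction.
    """

    def __init__(self, store):
        self.store = store      # the backing key-value store (a plain dict here)
        self.pending = {}       # buffered inserts/updates: key -> value
        self.deleted = set()    # buffered deletions

    def insert(self, key, value):
        self.pending[key] = value
        self.deleted.discard(key)

    def delete(self, key):
        self.pending.pop(key, None)
        self.deleted.add(key)

    def commit(self):
        # Apply every buffered operation in one pass, then clear the buffers.
        for key in self.deleted:
            self.store.pop(key, None)
        self.store.update(self.pending)
        self.pending.clear()
        self.deleted.clear()


store = {"a": 1}
txn = DeferredTransaction(store)
txn.insert("b", 2)
txn.delete("a")
assert store == {"a": 1}   # nothing applied yet: all operations are deferred
txn.commit()
assert store == {"b": 2}   # everything applied in one final pass
```

Breaking a large job into several smaller transactions, as the text suggests, keeps `pending` and `deleted` small between commits.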
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945497.22/warc/CC-MAIN-20180422061121-20180422081121-00330.warc.gz
CC-MAIN-2018-17
897
10
https://www.fi.freelancer.com/job-search/integrate-paypal-api/
code
Please read the request carefully. This work is only for proficient developers of APIs and BigCommerce. Have you coded APIs before? I have this API that needs to be coded into BC. Please see the API documentation below: [log in to view the URL] Core Integration Components: Retrieve orders from BigCommerce (once the payment has been captured). Hi Developer, I am looking for someone who has done Java (or other) API development for a USA payroll software company. Project Goal: to integrate our HR software with any payroll software company. You write the code, you help us connect. We want to send employee records, timesheet and vacation tracking info to the payroll software. ...will allow us to streamline managing, monitoring, editing and even creating their business listings. We are looking for a very experienced API developer who is knowledgeable with the Google API, Bing API, Yelp API and other sites like them. Please feel free to message me if you have any more questions. We are open to price negotiation for this particular We need to build out a payment structure for our shopping cart that will work with PayPal Parallel Payments. We currently do not use PayPal, so this would be from scratch. Please have experience with parallel payments if you are bidding on this project. The buyer should see a single shopping cart and total, but on the back end the payment gets split up. I have a supplier that delivers its catalog to me through a PHP and XML API (stock, models, etc.). I need a web page connected to my supplier's API, where my salespeople can type in an item and see its stock and price. Infrastructure is not a problem; I have a virtual server with PHP and IIS working correctly. ...for an Article Spinner API development as my IP. Acceptance Criteria: 1. Should match the output of the likes of [log in to view the URL] or Spinner Chief. 2. Should defy any plagiarism test. We use Grammarly to test it. 3.
Should be human readable and English-grammar ready (only 10% errors accepted). 4. Enable a website for selling this API. We need it quick and ...future on several special WooCommerce tasks. We are planning to connect our WooCommerce store via REST API to a service which converts it to CSV. Since we have some custom plugins like renting products, bundles and so on, the given values of the API are somewhat confusing and, to be honest, we are certainly no experts. Apart from this, we have several Provide API status monitoring. Many services provide API monitoring. Looking for someone to configure SaaS to do API monitoring; must configure an automated task that checks the health of an API and set up notifications. Must be familiar with the recommended service. Extend an existing .NET Core OData REST API by implementing/integrating CalDAV service endpoints for calendar/scheduler data, which is part of the existing infrastructure. Lightweight in C#/.NET Core. Goals: providing the CalDAV service endpoints for mobile device synchronization for the existing scheduler/calendar data. The CalDAV services need I need to integrate the LipaPay payment gateway to sell products on my website. The gateway credentials are live, so the job should be relatively fast for someone conversant with WooCommerce and WordPress. The PayPal payment gateway system works properly in sandbox mode and updates return URL IPN data in the database table, but in live mode it's not updating return URL IPN data in the database table even though it is successfully sent through IPN History. Note: looking for a developer from outside India who has his own PayPal account for testing and debugging the issue. ...31","group"=>"2") , array("zipcode"=>"07945","group"=>"2") , array("zipcode"=>"07836","group"=>"2") , array("zipcode"=>"07840","group"=>"1")); The other input is my Google API key $GOOGLE_API_KEY 2) I need the output to be...
I want this API integrated on my website [log in to view the URL] so users can try trading without risking capital. It should be a rather easy implementation; you'll have to write some interface code. Budget $30. Hello, I need someone who can help integrate the API of a power bank and station. I would need an API created from IBM Bluemix. It uses the Compare and Comply tool. I would need to be able to upload a PDF document from my website to the tool; then the results of the analysis would need to be filtered into a predefined template and shown to the user for download. We are looking for an individual to create a cross-platform chat program that will also integrate through an API with our SMS provider. We will be adding Android & iOS endpoints to it at a later time, so the application will have to be designed with that in mind. We have a basic prototype designed here. We are open to the idea of using an open-sourced Create a nice, clean web page where I can store 2 fields, name and phone number, store them in a database, add buttons to add, update and delete, and then pull the info by API request. Requires a login for the site. ...settings, will take up to 10 days - vendor panel to manage all vendor details as shown in the requirements, will take up to 10 days - mobile API, will take up to 10 days - Testing. The main task is to fix issues in the API with a mobile developer for 15 days. You will not have too many tasks. I would like to discuss my issues with you in more detail via chat. My budget We need help coding the API for the Newegg, Walmart and Amazon marketplaces, in VC# preferably, or VB. Our product listings are stored in a MySQL database. We need to synchronize the products' inventory quantity and price from the MySQL database with the Newegg, Walmart and Amazon marketplaces. Please do not offer to sync information as an Excel or CSV file.
We need to have an ability ...following requisites: - Have extensive knowledge of Android's architecture; - Know how to build/compile Android from source to create a custom Android image that works with API 22 and above; - Know how to work with apktool by decompiling and studying required apps' inner workings, as well as any other tools that can achieve injection of methods; - Be I would like to integrate an online recharging API into my Shopify website to allow the end customer to recharge. [log in to view the URL] The same is to be integrated into my Shopify website. ...basic customer import for our mail list, but it doesn't connect to the screenshot features which MailChimp provides. Here's the information from MailChimp; apparently, we need a custom API 3.0 made. [log in to view the URL] Our website is made with osCommerce. With the SMS Retriever API, you can perform SMS-based user verification in your app automatically, without requiring the user to manually type verification codes, and without requiring any extra app permissions. This will replace asking for phone permission to read SMS for OTP. We need to integrate this in our app. Hi, quotation for the below: my aim is to achieve an API setup which can connect to an already existing piece of software's database (MySQL). I want the API to be used on external websites which connect via a user account's API key to the existing database for users to create a quotation. All I need is the creation of the secure login, and after login I want to build a project based on an API using Python with the Django web framework. Read data from other websites either by API or scraping. To present the price comparison, the project will be discussed with the freelancer. Application Control Panel: a simple PHP page with an option for the user to pay either with [log in to view the URL] or NMI [log in to view the URL] [log in to view the URL] I won't be able to provide live credentials - so just send me the code.
Super urgent - want it done in 3-4 hours... so only experienced developers should apply. Regards. PS: Approx I need an expert for Voice API & SDK | VoIP Calling In Your Apps | Sinch and JavaScript only. Experience required: Node.js and CI. ...Synergy Wholesale API Documentation (Domain Names & DNS only): [log in to view the URL] Customer Ordering: - Allow new/existing customers to order and transfer domain registrations - For AU domains, request ABN/ACN information - Submit new orders via the Synergy Wholesale API once payment is I have a WordPress website that is mobile-friendly and works on all screens. I want to create an iOS & Andr...and works on all screens. I want to create an iOS & Android mobile application for that website, but I don't want to create it from scratch, so I need someone to do it through an API. I also need to design a native side menu and tabs in my application. I have an Angular app (Node 7) and a website in Bootstrap. I need to integrate a "single page" from the Bootstrap website into the Angular app. Please share your LinkedIn profile to be considered for the project. Expect your bid amount to be fairly accurate. You can expect more work going forward and possibly a full-time role if you are based in India. Hi everyone, we have signed up to a loyalty card system with [log in to view the URL] and have been given access to their API at [log in to view the URL] Loyalty Card: we would like the user to be able to register for a loyalty card and check its current balance. Thanks, Richard. I have a chatbot server hosted on my company's intranet. I need to integrate the same chatbot APIs with my company's SharePoint website, which also runs on the same intranet. Need to develop...website in Laravel. All the products will be fetched from another website via APIs; the user will be redirected to that website to buy the product. We will get the response via API. This means the communication between our own website and the other website will be via APIs.
The skills required: Laravel, Bootstrap, jQuery, APIs, database, and web services. ...knowledge of API integration with a website. You will integrate the following APIs into an existing website. You will also need to load some products onto the website. APIs: [log in to view the URL] [log in to view the URL] [log in to view the URL] Skill set required:
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831334.97/warc/CC-MAIN-20181219045716-20181219071716-00369.warc.gz
CC-MAIN-2018-51
10,018
34
https://apiacademy.co/2016/05/when-good-api-design-is-a-waste-of-time/
code
The idea that good design is essential to building great products and services has become a truism in our industry. Most of us intuitively understand that expending effort on the design of our code, system architecture and APIs will pay off after implementation. I’m certainly a big believer in the power of good design for the API space, but I wanted to explore a situation where a design focus might not be necessary. Consider the case of a small business offering a cloud-based service product for a niche market. If this business chose to invest in a well-designed, developer-centric API, at a minimum they could expect: - A reduced learning curve for developers consuming the interface - A reduction in troubleshooting time - Increased interest from their developer community For most audiences, these are goals worth achieving. Indeed, this is why we emphasize good design for APIs in the first place – the benefits fit remarkably well with the main reasons for embarking on an API strategy: reduced cost and increased adoption. But, in my conversations with connectivity professionals from larger organizations, it is apparent that not all service vendors see the value in investing in this type of design effort. Developers and architects are bursting with tales of forced integration with service providers who have simply thrown an ugly or barely functioning interface on top of a core component. It is in these scenarios that we hear about laughable attempts at implementing whichever API styles and features are in fashion. ‘REST’ and ‘security’ become sales-worthy buzzwords that don’t live up to their promise when developers get their hands on the actual interface and real project work commences. In the majority of these cases, technical teams have very little say during the procurement process for outsourced and cloud-based services. In effect, these API providers don’t need to design for their developer audience because that audience isn’t critical to winning the business. 
For many years, a sound strategy for selling cloud-based products has been to sidestep technical teams and engage directly with the business. It’s frustrating that technology teams are often still left with the responsibility for reducing integration costs regardless of the lack of sophistication in the APIs that they are tasked with connecting to. Thankfully, the wealth of knowledge and connectivity products in the enterprise space allows these teams to reduce the impact of bad design on the overall project and organization. Components such as API proxies can be used not only to build a facade for APIs that are being exposed, but also to provide abstraction for services that are being consumed. In essence, the design burden can shift from the service provider to the enterprise developer, who wraps a poorly designed interface in a more consumable, developer-friendly API for the rest of the organization to use. As a whole, the scenario makes sense. Well-designed products are based in part on a designer’s empathy for their users. Good design involves perceiving a product from a user’s viewpoint, along with an understanding of the impact that design decisions will have on the user base. However, an organization that builds an API as an afterthought for developers, who are viewed only as a means to an end, will likely produce a poor API. Ultimately, building a technology business on developer apathy is a bad idea. The industry shift towards API product-based integration is empowering technology teams at all levels, and services that continue to ignore the needs of their developers will eventually be ousted from the market. In short, good design is only a waste of time when you don’t care about your users. If in fact you do care about the developers who will be using your API product, then you need to invest in designing your API with their point of view in mind.
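The facade approach described above can be sketched in a few lines. This is an illustrative Python example, with an invented vendor interface standing in for the poorly designed API, showing how the design burden shifts to the enterprise developer who wraps it in something consumable:

```python
class UglyVendorClient:
    """Stand-in for a poorly designed third-party API: a cryptic method
    name, positional op codes and flags, and a flat, abbreviated response."""

    def doOp(self, op_code, ref, flag):
        # 'flag' is part of the awkward signature; this sketch ignores it.
        if op_code == "GETCUST":
            return {"STS": "00", "CNAME": "Acme Corp", "CREF": ref}
        raise ValueError(f"unknown op code: {op_code}")


class CustomerFacade:
    """Developer-friendly facade: one intention-revealing method, a clean
    return shape, and vendor error codes surfaced as exceptions."""

    def __init__(self, vendor):
        self.vendor = vendor

    def get_customer(self, customer_id):
        raw = self.vendor.doOp("GETCUST", customer_id, 1)
        if raw["STS"] != "00":
            raise RuntimeError(f"vendor error status {raw['STS']}")
        return {"id": raw["CREF"], "name": raw["CNAME"]}


facade = CustomerFacade(UglyVendorClient())
assert facade.get_customer("c-42") == {"id": "c-42", "name": "Acme Corp"}
```

The rest of the organization codes against `CustomerFacade`; only the wrapper's author has to know the vendor's op codes and status conventions.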
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00356.warc.gz
CC-MAIN-2021-49
3,899
11
http://wireless.sys-con.com/node/3075368
code
By David H Deans | October 16, 2014 08:00 AM EDT | May 22, 2014 If you're an executive who's concerned about the high cost of proprietary software, you're not alone. If your IT team pushes back whenever Line of Business leaders ask them for feature enhancements that go beyond the limits of the commercial software packages they've licensed, then you already know that frustration. Are you wondering if there's an alternative to this legacy business technology scenario? Consider Open Source Software (OSS) and follow in the footsteps of the previously enlightened. North Bridge Venture Partners recently announced the results of the eighth annual investigation into OSS trends. The latest market study findings point toward the increased role that OSS solutions have in today's enterprises. "Open source is enjoying a proliferation that starts with a growing number of new developers at the grass roots. Many then go on to join enterprises who themselves are engaging in open source projects," said Michael Skok, general partner at North Bridge Venture Partners. The authors of the survey say that respondents continue to share insights that demonstrate how open source is transforming the software landscape -- as the inherent quality, functionality, and increasing ease of deployment create a powerful gravitational pull across all vertical industry categories. The most exciting forward-looking applications, such as Open Hybrid Cloud services, have an open source foundation at their core. Other leading areas -- such as big data, content management and enterprise mobility -- are being positively advanced by the open source model of application development. Exploring the Emerging OSS Market Momentum Compelling survey responses have highlighted the democratization and proliferation of open source in three broad areas of strategic and tactical impact -- new people, new technologies and new economics. 
Impact on New People Survey results uncover the growth of first-time developers participating in the open source community, and point to both new open source education initiatives and the prevalence of open source-based educational platforms. In addition, the survey reveals the three industries expected to be impacted most by OSS are education (76 percent), government (67 percent), and health care (45 percent). Results also demonstrate how embedded OSS has become in our social fabric. Respondents reported the top ten areas where OSS will impact our everyday lives, including: Education; Mobility; Web privacy/security; Home appliances; Wearable devices; Robotics; Entertainment; Automotive; Gaming; and Monetary exchange/payments. Impact on New Technologies Open source has long been touted as the foundation for new technological innovations, and as OSS projects grow, so, too, do these new technologies. As data from Black Duck shows, with nearly one million open source projects to date, the rate of innovation spurs new technologies such as the Internet of Things (IoT) and the continued rise of Software as a Service (SaaS). When asked which areas OSS technology was leading, 63 percent cited cloud computing or virtualization as the key area where developers have turned to OSS. In addition, 57 percent answered content management, 52 percent selected mobile technology, and 51 percent answered security. Impact on New Economics 56 percent of corporations expect to contribute to more open source projects in 2014, signaling a change in the way enterprises view open source. When asked why they engaged with OSS communities, cost reduction was still the top response (61 percent), but 45 percent of corporations responded that they also did so to gain competitive advantage. For companies with over 1,000 employees, influencing a project’s direction was the third most popular answer. 
Finding and recruiting talent fell from the number two reason to engage with communities in 2013 to the number five answer this year, with only 37 percent choosing that as the top reason. This may be the result of OSS experience becoming a price of entry rather than a distinguishing factor. Additional findings from the latest study include: - 72 percent of respondents chose to use OSS because it provides stronger security than proprietary solutions, signaling a growing awareness that the proper management and use of OSS actually provides an even more secure environment than proprietary solutions. Building upon this, 80 percent of respondents reported choosing open source because of its quality over proprietary alternatives. - 68 percent of respondents said that OSS helped improve efficiency and lower costs, and 55 percent also indicated that OSS helped create new products and services, further supporting the idea of OSS as both an entrenched and a strategic element of today’s enterprises. - 50 percent of enterprises report openly contributing to and adopting open source, signaling a shift in the way organizations view the value of and their role in making contributions to the community. More than 1,200 industry influencers took this year's survey, answering questions about OSS trends, opportunities, key drivers of open source adoption, community engagement and the business problems OSS solves -- both now and in the foreseeable future.
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544097.11/warc/CC-MAIN-20161202170904-00134-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
15,662
67
https://www.adobepress.com/articles/article.asp?p=1247262&seqNum=2
code
Working with Multiple Document Windows You can have more than one document window open at a time. Here, you'll create a second window so that as you work, you can see two different views of the same document simultaneously. - Choose Window > Arrange > New Window. Now you have the same document open in two windows. (Notice that InDesign displays :2 after the name of the second document.) - In Mac OS, choose Window > Arrange > Tile to display the windows side by side. - Select the Zoom tool in the Tools panel. - In the window on the left, draw a marquee selection around some of the text to zoom in on the artwork. (In Figure 11, we've selected the white box containing the headline Operative Words.) Notice that the window at the right doesn't change magnification. This configuration lets you see how any changes you make in the selection affect the rest of the layout. - Choose Window > Arrange > Consolidate All Windows. This action creates a tab for each window (below the Control panel), as shown in Figure 12. Click the tabs to control which document window displays. - Close the :2 window by clicking the Close Window button (the X) on the window's tab. The original document window remains open. - In Mac OS, resize and reposition the remaining window by clicking the Maximize button at the top of the document window.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00559.warc.gz
CC-MAIN-2021-21
1,331
9
https://community.microfocus.com/img/bandr/f/itrc-251/408492/dp9-failover-to-dr-site
code
I have been tasked with designing and performing a DR test. I have 2 DP CMs and 2 StoreOnce 4500s. I am trying to figure out the best way to have the DR CM take over and keep backups rolling. I am sure someone has set this up and done it before. I am not looking for details (at this point), but a nice step-by-step guide would be great.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154032.75/warc/CC-MAIN-20210730220317-20210731010317-00633.warc.gz
CC-MAIN-2021-31
316
1
http://www.cvtips.com/career_advice_forum/threads/5023-Analyst-or-Software-engineer
code
I did my graduation in computer science and worked on a text analytics project for 1.5 years; after that they closed the team and my role shifted to the analyst team. I have now been working as a market research analyst for the past 6 months. I enjoy both analytics and technical work. As a computer science graduate, will I get good career opportunities in the analytics field? Should I shift back to technical work, or what would you do in my place?
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00040-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
451
1
https://www.raspberrypi.org/forums/viewtopic.php?p=708955
code
I am trying to compile and install Apache from source (2.4.12). I am trying to install to the /Splunk dir as user splunk.
Code: Select all
[splunk@xxxxxxxxxxxxx /]$ ls -la | grep Splunk
drwxrwxr-x 5 splunk splunk 9 Mar 2 06:04 Splunk
[splunk@xxxxxxxxxxxxx /]$
I downloaded apr-1.5.1.tar.gz & apr-util-1.5.4.tar.gz and extracted the contents to /Splunk/httpd-2.4.12/srsclib/apr and /Splunk/httpd-2.4.12/srsclib/apr-util respectively. From /Splunk/httpd-2.4.12 I ran ./configure --with-included-apr --prefix=/Splunk/apache and it completed without errors. From /Splunk/httpd-2.4.12 I ran make and it completed without errors. From /Splunk/httpd-2.4.12 I am running make install and it ends with errors. Can anyone help please?
make: Leaving directory `/Splunk/httpd-2.4.12/support'
cp: preserving permissions for `/Splunk/apache/modules/httpd.exp': Operation not supported
make: *** [install] Error 1
make: Leaving directory `/Splunk/httpd-2.4.12/support'
make: *** [install-recursive] Error 1
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00018.warc.gz
CC-MAIN-2020-40
984
12
https://www.internet-khazana.com/blog/2013/09/18/download-internet-explorer-11/
code
Microsoft is rapidly trying to restore its browser reputation by introducing some cool versions, 9 and 10. Although these browsers are not used by as many people as they were before Mozilla came into this field, the latest version, Microsoft Internet Explorer 11, promises to change the way we use the internet. The latest version is again designed primarily for touch devices, but it will perform better on your computer as well. With full-screen browsing and HTML5 rendering capabilities you will see amazing results and can enjoy a full browsing experience. Microsoft says that the latest browser will require a 1 gigahertz (GHz) 32-bit (x86) or 64-bit (x64) processor. This version will also require the Windows 7 or Windows 8 operating system, so if you have an earlier version you need to upgrade your system.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817437.99/warc/CC-MAIN-20180225205820-20180225225820-00271.warc.gz
CC-MAIN-2018-09
808
2
https://aws-observability.github.io/observability-best-practices/tools/slos/
code
Service Level Objectives (SLOs) Are highly available and resilient applications an active business driver for your company? If the answer is ‘yes’, continue reading. Failures are a given and everything will eventually fail over time! This becomes an even more important lesson when you are building applications that need to scale. Here comes the importance of SLOs. SLOs measure an agreed-upon target for service availability based on critical end-user journeys. That agreed-upon target should be crafted around what matters to your customer / end-user. To build such a resilient ecosystem, you should measure performance objectively and report reliability accurately using meaningful, realistic, and actionable SLOs. Now, let us get familiar with key service level terminology. Service Level Terminology SLI is service level indicator: a carefully defined quantitative measure of some aspect of the level of service that is provided. SLO is service level objective: a target value or range of values for a service level that is measured by an SLI, over a period of time. SLA is service level agreement: an agreement with your customers that includes consequences of missing the SLOs they contain. The following diagram illustrates that SLA is a ‘promise/agreement’, SLO is a ‘goal/target value’, and SLI is a measurement of ‘how did the service do?’. Is there an AWS tool to monitor all of this? The answer is ‘yes’! Amazon CloudWatch Application Signals is a new capability that makes it easy to automatically instrument and operate applications on AWS. Application Signals instruments your applications on AWS so that you can monitor the health of your application and track performance against your business objectives. Application Signals provides you with a unified, application-centric view of your applications, services, and dependencies, and helps you monitor and triage application health.
Application Signals is supported and tested on Amazon EKS, Amazon ECS, and Amazon EC2 and at the time of writing this, it supports only Java applications! Application Signals helps you set SLOs on your key performance metrics. You can use Application Signals to create service level objectives for the services for your critical business operations. By creating SLOs on these services, you will be able to track them on the SLO dashboard, giving you an at-a-glance view of your most important operations. To speed up root cause identification, Application Signals provides a comprehensive view of application performance, integrating additional performance signals from CloudWatch Synthetics, which monitors critical APIs and user interactions, and CloudWatch RUM, which monitors real user performance. Application Signals automatically collects latency and availability metrics for every service and operation that it discovers, and these metrics are often ideal to use as SLIs. At the same time, Application Signals gives you the flexibility to use any CloudWatch metric or metric expression as an SLI! Application Signals automatically instruments applications based on best practices for application performance and correlates telemetry across metrics, traces, logs, real user monitoring, and synthetic monitoring for applications running on Amazon EKS. Read this blog for more details. Check this blog to learn how to set up an SLO in CloudWatch Application Signals to monitor the reliability of a service. Observability is a foundational element for establishing a reliable service, thereby putting your organization well on its way to operating effectively at scale. We believe, Amazon CloudWatch Application Signals will be an awesome tool to help you achieve that goal.
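The SLI/SLO relationship described above can be made concrete with a little arithmetic. A common way to track performance against an SLO target is an error budget; the sketch below is illustrative (the traffic numbers, the 99.9% target, and the error-budget framing are standard SRE practice, not taken from this text):

```python
# Availability SLI: the fraction of requests that succeeded.
def availability_sli(good_requests: int, total_requests: int) -> float:
    return good_requests / total_requests

# Fraction of the error budget still unspent for a given SLO target.
def error_budget_remaining(slo_target: float, good: int, total: int) -> float:
    allowed_failures = (1 - slo_target) * total  # failures the SLO tolerates
    actual_failures = total - good
    return 1 - actual_failures / allowed_failures

# Example: 1M requests, 500 failures, against a 99.9% availability SLO.
sli = availability_sli(999_500, 1_000_000)                    # measured availability
budget = error_budget_remaining(0.999, 999_500, 1_000_000)    # share of budget left
```

Here the measured SLI (99.95%) is above the 99.9% target, and half of the period's error budget has been consumed.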
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817112.71/warc/CC-MAIN-20240416222403-20240417012403-00041.warc.gz
CC-MAIN-2024-18
3,713
18
https://www.mountainradiofm.com/getresponse-500-error/
code
Finding the perfect email marketing tool is difficult. There are numerous factors to consider: how easy it is to use, features, and of course cost. GetResponse is among the most effective email marketing tools around and is very popular. But is it right for you? In our GetResponse review, I'll take you through all the details of the software and save you a lot of time evaluating it. GetResponse email marketing software summary - GetResponse is a powerful email marketing tool with smart automation features. - Setting up campaigns is extremely easy. GetResponse helps you at each step of the way. - Lots of templates available for emails, landing funnels, forms, and pages. Although some could do with an update. - I enjoyed the easy integration with analytics and other software. - Pricing is competitive and it's easy to scale as your business grows. As a well-rounded email marketing and automation tool, we recommend you take a look at GetResponse. GetResponse's key features: GetResponse is a full-featured email marketing and automation tool with all the nuts and bolts you would expect. Here are GetResponse's most important features: Drag and drop email editor: Creating marketing emails and newsletters is simple. - Drag content blocks onto the canvas and edit them directly. Add special elements like photos, video, and products to your emails. - Email templates: GetResponse includes 120+ email templates. You can save templates for future use, design in HTML, or build your own layout from scratch. - Funnel builder: Create complete signup and sales funnels. Funnels let you sell more and grow your email list. - Email automation: GetResponse is full of email automation features.
Create drip campaigns, triggered emails, and autoresponders. GetResponse automations have lead scoring to distinguish your hottest leads. - A/B testing: The backbone of every successful campaign is testing. In GetResponse you use an A/B testing wizard for emails and landing pages, and automatically use the best variation as the winner. - Landing pages: Build landing pages with a drag and drop editor. GetResponse's landing page editor can add forms and videos. The pages are hosted by GetResponse, so you don't need separate hosting. - Customer support: GetResponse has a responsive support team. There is a knowledge base, a help center, and 24/7 email and chat support. - Integrations: Sync your GetResponse account with any CRM or eCommerce platform. You can build custom integrations with the GetResponse API or Zapier. GetResponse Review: the details Now to the part you've been waiting for. We're going to review the core GetResponse features and see if they're the right fit for you. Featured image for GetResponse review 2022 GetResponse Email Marketing Designing your first email is smooth with GetResponse. They've thought about the user experience. I was able to set up my first email within 10 minutes and it felt very intuitive. Design your first email with GetResponse. As soon as you create a free account, you can start creating your actual email. Click on 'Design message', and you're greeted by over 120+ email templates. Nice! getresponse email templates. GetResponse has 7 categories of email templates. For example, you can 'educate, promote, welcome' your customers. Say you like an email template: press "save" and it shows up in the 'My templates' tab to reuse later. After you choose a template, you start customizing your email in the email builder.
Getresponse email editor vs HubSpot review 2022 The GetResponse email editor is loaded with features, yet never overwhelming. All the editing options are shown when you need them. It's all very user-friendly. Traditional HTML code lovers can start from an HTML layout or code it from scratch, but that isn't required at all. It is good to know that there is also an option for custom HTML blocks inside the drag and drop template builder. Drag & drop email editor options In the layout section, you determine the structure and look of your email. Set up how many columns you want, standard colors, and so on. The main editing is done with drag and drop content blocks. You can add: - Images, text and buttons - Video (YouTube-ish!). - Whitespace and margins. - Social sharing links, and - Custom HTML. My favorite part of the email editor is that you can save sections and blocks. I always use the same elements for headers, footers, and some text/image combos. So in my next email, I save a lot of time by re-using them. Adding videos and images is straightforward. You can either drag and drop your image or browse from GetResponse's free stock photo (!) collection. For videos you just enter the (YouTube) link and it'll show up embedded in the email. The fun doesn't stop here. Under the eCommerce tab, you can add products into your email from your online shop. Provided you're connected to your eCommerce platform, like Shopify, the products are right there. You can even add a 'recommended products' section for that extra touch. Setting up your email for success. On a single page, you create the entire email. Add subject lines, the sender email address, and select the right email list to send it to. getresponse getting started email designer.
The cool thing about GetResponse email tracking is that it lets you track eCommerce interactions: ecommerce email tracking. You can track what your readers do after they click your email links. Click tracking works with a GetResponse tracking snippet, but also works with a Google Analytics integration. With eCommerce tracking you know which campaigns drive revenue and sales, and can justify your marketing costs. GetResponse Email Automation. GetResponse Autoresponders are the building blocks of automation. You create the email just like regular newsletter templates, but then trigger it to be sent as an autoresponder. GetResponse autoresponder email marketing. The scheduling gives you control over email delivery. You can choose to send the email at the moment of signup, after a couple of hours, or at the exact time and date of your preference. The feature I like most here is the 'time travel' toggle. You turn this option on to deliver the email at the recipient's local time. Let's dive into the marketing automation features a bit more. Marketing Automation Tools. GetResponse shines in marketing automation. It is one of the best SMB email marketing platforms because of the automation workflows. GetResponse automation features. I loved GetResponse marketing automation the moment I opened the automation menu and saw this navigation bar. Getresponse review automation templates. This menu looks like a marketer's dream. Each of these submenus has automation templates to get you started. Since we are reviewing, I am making a 'welcome' automation for you. Creating a customer welcome automation. I build an automation to welcome my new customers. I start with a pre-made automation template. And after a few clicks I already have this: getresponse welcome email automation flow. I know, it looks great.
When you zoom in on the automation flow, you realize GetResponse has done half the work for you. It's easy to change everything the way you want it. The automation editor has conditions and actions. For example, when a customer purchases something from you (condition), you trigger an email (action). My first action is sending a welcome email. You can add as many emails and conditions as you want. For this email campaign, I've created a flow where my new customers get 2 emails in 2 days. For my convenience, I segment out customers who clicked on my email. Once the campaign starts, GetResponse monitors all of this automatically. My score for GetResponse automations is 8.5/10. It takes a bit of time to familiarize yourself with all the possibilities. Once you do, you have the power to build complete customer journeys. Create your own automations, build customer profiles and personalize emails. Getresponse automation workflows. With GetResponse you can build automation workflows for: - Lead qualification. - Engagement and retention. - Post-purchase notifications. - Abandoned cart triggers. - Webinars and online courses. - Sales promotion, and - Affiliate marketing. These are just some examples. With the pre-built automation templates, and a bit of tweaking, you can build your own automations with precision. Sign up for GetResponse for free below. Funnel builder. Onto the funnel builder. A funnel is another way to describe all the steps in a marketing campaign. That includes forms, emails, SMS, landing pages and so on. GetResponse's conversion funnel builder starts by asking if you want to: 1. Build your email list (or leads). 2. Sell products. 3. Promote a webinar. getresponse review funnel builder. Start with a brand-new lead magnet or use one of the 17 templates GetResponse offers.
Once you've picked your lead magnet, you can build out your funnel. The whole funnel building process is guided. You get reminders of what to do at each step and don't forget anything. GetResponse, for instance, tells you not to forget a thank-you page, and directly offers a thank-you page template. Smart! How to create a conversion funnel. As your first step, you'll create a signup landing page. Choose the template, fine-tune your copy and design, and publish the page. Then you'll create the thank-you page, followed by a promotion email. This is the email to start promoting your newly made funnel. getresponse review funnel dashboard. Promote your funnel via Facebook ads. Link your Facebook account to GetResponse and it just works. Obviously, more forms, emails and pages can crank up your funnel to max conversion. You get all your key statistics to check your campaign's progress. Monitor signup rates, the number of contacts, page views and success rate from the conversion dashboard. (They're making it too easy!). With the Email Marketing plan you can create lead and lead magnet funnels. For more, and for the abandoned cart recovery feature, you'll have to go with higher plans. I love that I can create a whole funnel in one dashboard, all with the ability to track key stats. It makes the GetResponse funnel builder feel effortless. Website Builder. The GetResponse website builder is their newest addition to the platform. GetResponse review website builder. You build a website from templates or with their 'AI-powered builder'. With the AI-powered builder, you answer a couple of questions and GetResponse will automatically create a custom website. Other website building options include: - Widgets (forms, chat boxes, price tables, etc).
- Website colors and styles. - Adding photos and logos. After fiddling with the builder for just 5 minutes, I was able to create a decent homepage with minimal effort. The builder lets you edit everything on the page. You can change fonts and text sizes, add/remove photos, reposition elements, adjust padding and more. There's a separate menu for adding and editing pages on the site. You can edit the navigation headers, bars, and footers. When you select a section, the drag and drop editor shows more customization options. The menus look exactly like the email and landing page editor. Webinars are so hot right now. GetResponse is one of the few email marketing platforms that have webinars included. Setting up a webinar is easy. After picking your title, you add the time, date and duration of the webinar. You select which contact list you'll add your registrants to. You can add autoresponders for registrants to receive right after they register. Once everything is set to go, GetResponse will create a webinar link to drive people to sign up for your webinars. getresponse email marketing webinar dashboard. Now you can send out invites to your contact list, manage other webinars, and monitor webinar performance. I found the webinar tool to be high quality. It's got interactive features that let you keep the audience engaged. These include chat, polls, Q&A and whiteboards. If you want to show a feature or product, you can live-share your screen. If I'm looking to sell a product, I can add a call to action directly in the webinar. People who are already using email marketing and webinars in separate systems will love the fact that GetResponse brings them both together.
You have access to the chat pod and the global settings for your event. The event is hosted inside the GetResponse application. They also have a mobile app you can use to give your webinar on the go. Try GetResponse today. Landing Page Builder. The GetResponse landing page builder is included for free in all plans. The landing page builder currently offers 198 templates. To be honest, I would stick to the most recent 100 templates; some of the older ones … look really old. GetResponse site templates. After picking a landing page template you can start editing. The editing experience is different from the email creator. The single-column menu on the right has all the drag and drop elements like text, images, video, buttons, and so on. At first, I thought I wanted the icons to be labeled. After playing around a bit you'll quickly understand what is what and grab the right elements. What I like about the landing page builder is that you can create A/B versions of your landing page from the start. The top left corner of the page lets you create as many versions as you want, in addition to forms and thank-you pages. Landing page setup. Once you're satisfied with your page design, you can add SEO, URL and email list settings. What comes next is probably my favorite part. Just like your emails, you set up analytics and web event tracking for your landing pages. You can choose your analytics platform to track landing pages. GetResponse integrates with all of them: Google Analytics, Facebook Pixel, Kissmetrics, and so on. The landing page builder gets the job done. Naturally, some specialized landing page software has fancier features.
If GetResponse can give their builder a bit of a refresh and add more templates, they'll be on par with other tools. Because you can link your landing pages to other campaigns like automations, webinars, and funnels, I would still stick with GetResponse. Other GetResponse features that deserve a mention are: SMS marketing, web push notifications, paid ads, and live chat. GetResponse is powerful beyond email marketing. To review an email marketing platform you have to look at customer support and pricing plans. GetResponse customer support. You can contact GetResponse support via live chat and email. Their live chat support is available 24/7. GetResponse has a large help center that covers all product-related questions you might have. getresponse help center review. There's a guide for beginners to get started. There are articles on how to use and set up individual features. And you'll find case studies to improve your email marketing and marketing automation. The GetResponse platform is available in 26 languages. The only small downside is that phone support is only available with the Max plan. GetResponse pricing & plans. GetResponse features 3 plans for smaller companies: Email Marketing, Marketing Automation, and eCommerce Marketing. MAX and MAX2 are their enterprise offerings with advanced marketing features and dedicated support. GetResponse pricing is based on the size of your email list and starts at $19 for the Email Marketing plan. Here's the breakdown of their cost for 1000 contacts. GetResponse pricing and plans dollar 2022 price. If you choose an annual subscription, you already get 18% off. This becomes a whopping 30% on the 2-year plan. But we got you an extra 10% discount off your GetResponse plan.
Simply register via this unique link below and get our GetResponse discount. Try GetResponse for free below. GetResponse offers over 170 integrations. They've organized them in a neat fashion on their integrations page. You can connect your GetResponse account with: - popular eCommerce platforms like Shopify. - payment gateways. - social media applications. - landing page and popup builders. - conversion tools. Furthermore, with the GetResponse API you can build your own integration or use Zapier to connect. GetResponse advantages and disadvantages. Let's pull out the weighing scales and compare the pros and cons of GetResponse. What we like the most. - Perfect for scaling, supports both small and large companies. - Landing pages and conversion funnels included. - Webinar funnels. - Large collection of templates. - Advanced marketing automation features. - One-month free trial without a credit card. - 24/7 chat and email support. Could be better. - If you cancel an annual plan, no money back. - The landing page builder is good, but some templates look outdated. Try GetResponse for free below. GetResponse email marketing alternatives. There are of course numerous GetResponse competitors. How does GetResponse compare to other marketing tools? These are the main alternatives: SendinBlue is a great choice for small to medium businesses but not the best at the enterprise level. It is affordable. Many users will find the tools to be sufficient. Its features are limited compared to GetResponse. Both companies offer a free plan. Sendinblue is free for 300 emails a day, while GetResponse is free for up to 500 contacts. ActiveCampaign is a marketing automation service with an included sales CRM. ActiveCampaign has more powerful automation features and similar pricing.
ActiveCampaign gives you an email, form, and landing page builder, a sales CRM, lead scoring, SMS, messaging and chat. MailerLite is an affordable email marketing tool. Compared to GetResponse, MailerLite is cheaper and offers good-looking templates. There are fewer features, for example no webinars. GetResponse bottom line: Is it the right fit for you? Congratulations! You made it to the end of our review. You understand how essential it is to have the right email marketing tool: one that is powerful and fits your business requirements. GetResponse fits the bill as effective and capable online marketing software. Here's how we score it: - Ease of Use: 4.25/5. - Value for Money: 4.25/5. - Editor and templates: 3.75/5. - Functionality: 4.5/5. - Email Automation: 4/5. - Customer service: 4/5. Overall score: 4.1/5. GetResponse gets a 4.1/5 for its interface, automation features, and value for money. The landing page builder could do with an overhaul, but I'm sure GetResponse is already on it. Get GetResponse for free here. Frequently Asked Questions (FAQs). How much does GetResponse cost? GetResponse has a free plan for up to 500 subscribers which includes unlimited emails. GetResponse pricing starts from $19 for 1000 contacts and goes up to $119 for their eCommerce marketing plan. The cost of each plan increases as your subscriber count rises. Is GetResponse good for email marketing? Yes, GetResponse is good for email marketing. It's easy-to-use email marketing and marketing automation software. They offer a simple way to design and send email marketing messages. You can build high-converting newsletters, autoresponders, automated funnels, and much more.
GetResponse is good for lead generation, sales conversion, and cart abandonment campaigns. Does GetResponse have a CRM? Yes, GetResponse has a CRM on higher-tier plans. Even on lower-priced plans, you get contact management features like tagging, scoring, website and event tracking, and automation features. If that isn't enough, it also integrates with CRMs like HubSpot, Salesforce, Zoho, Microsoft Dynamics 365, and more. Where can I find the GetResponse API key? You'll find the GetResponse API key by visiting Tools > Integrations and API > API. There you need to click the "Generate API key" button, give it a name, click "Generate" and you'll have your API key. How does GetResponse compare to Mailchimp? The main difference is that GetResponse offers advanced marketing automation while Mailchimp is better known for basic email marketing for the masses. Once you get a bit of experience, you'll enjoy GetResponse more.
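The FAQ above mentions generating a GetResponse API key. As a hedged sketch of how that key is typically used, here is how a request against the v3 REST API is authenticated; the `X-Auth-Token: api-key <key>` header scheme follows GetResponse's public API documentation, while the placeholder key and the `/campaigns` path are purely illustrative:

```python
import urllib.request

# Placeholder only: paste the key generated under Tools > Integrations and API > API.
API_KEY = "your-api-key-here"

def build_request(path: str) -> urllib.request.Request:
    # GetResponse v3 authenticates with an "X-Auth-Token: api-key <key>" header.
    return urllib.request.Request(
        "https://api.getresponse.com/v3" + path,
        headers={"X-Auth-Token": f"api-key {API_KEY}"},
    )

# Build (but do not send) a request listing your campaigns.
req = build_request("/campaigns")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns JSON; error handling and pagination are omitted from this sketch.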
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00291.warc.gz
CC-MAIN-2023-06
23,027
203
https://diydrones.com/forum/topics/gps-module-orientation-and-position?page=1&commentId=705844%3AComment%3A1322424&x=1
code
I am tidying up my hex, making room for my video transmitter. I am going to put the video transmitter on the top layer of my stack. My telemetry board and antenna are on the top layer as well. To avoid crowding, I was going to mount my GPS module on one of the hex arms, close to the main body. It still has an unobstructed view of the sky. Do you see any problems with moving the GPS receiver to this new position? Also, I did not notice any labels on the GPS indicating forward, can it be placed in any orientation? (In the photo, the GPS is mounted on the right arm) Ty, the GPS module has no specific orientation other than place it flat with a good view of the sky. The location you're showing will work, to an extent. As your hex rotates your central stack will block part of the GPS module's view of the sky and you'll likely lose whatever satellites happen to be in that position at that moment. If you plan on using 3.0 or better, it turns out that a solid GPS signal is a very big deal. 2.9 and less, not so much. Your worst case scenario is if you're flying in a forested area or in a canyon with a minimal number of satellites in view, the hex rotates losing a few and you lose GPS lock. If you can deal with this through the failsafe setup or are otherwise comfortable with this risk level then you are fine. Personally, I'd be more inclined to mount it on a raised stalk (like a Naza) if I absolutely had no choice but to mount it there. Yeah, good point. After I started looking at it more, your point about the hex rotating became clear. I guess I need to try to find a longer cable and try to mount it like an antenna. Thanks for the info... If you don't actually need to get it physically away from the video transmitter on your tower, you could just mount a vertical dowel or plastic rod with a standoff from one of your existing tower supports. That way you wouldn't need a new cable.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317130.77/warc/CC-MAIN-20190822130553-20190822152553-00288.warc.gz
CC-MAIN-2019-35
1,904
8
http://www.photopost.com/forum/1095708-post1.html
code
I just upgraded our gallery to the latest (5.0.2) version but now all gallery pages show a "Done, but with errors on the page." message in the status bar. When I check the code the error refers to, it points to some PP code (I am using the stock templates with vb3 integration). Check out my gallery at http://gallery.pimprig.com to see what I mean. Any ideas how to fix this?
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704253666/warc/CC-MAIN-20130516113733-00037-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
378
4
https://premium.wpmudev.org/forums/topic/trying-to-style-comments-plus-form/
code
I really tried to avoid posting this question because I think it should be easy to solve. But it's beaten me. I'm just trying to add some vertical space in the elements of the Comments Plus form. Specifically under the text: "Click on a tab to select how you'd like to leave your comment", and especially to move the blue comment submission button down 10px (see screengrab). I'm in the comments-specific.css file, but it doesn't seem to like margin commands.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896374.33/warc/CC-MAIN-20200708031342-20200708061342-00480.warc.gz
CC-MAIN-2020-29
459
3
https://www.whyigaming.eu/news/clayton-from-hero-gaming-shares-the-importance-of-education/
code
Clayton from Hero Gaming shares the Importance of Education

Clayton Farrugia is Lead Data Analyst at Hero Gaming, but Clayton had a long career before joining the iGaming industry, working in banking, software and telecoms. Thankfully, Clayton committed to improving his education constantly through related courses at MCAST and beyond.

What do you do?

I am the Lead Data Analyst for Hero Gaming. As an overview I cover managing the Data Analyst and Business Intelligence team, building reports and analysing data, assisting during audits, presentations and building automated processes.

What are the daily functions that the job requires?

- Check if there are any issues that team members are facing.
- Check that team tasks are aligned with business targets.
- Discuss projects with stakeholders.
- Investigate any issues through data.
- Analyse brand and market performance.
- Import data from third-party APIs.
- Build automated processes and build dynamic reports which let stakeholders change filters dynamically without the need of the Business Intelligence team.
- Analyse attribution to check acquisition channel behaviours.
- Build up weekly presentations from deep-dive analysis.
- Where applicable, create machine learning models.

How long have you been in the iGaming industry?

I have been working in the iGaming industry for the past 5 years, with the last 4 years at Hero Gaming.

What did you study and where?

I studied Computer Science at MCAST, following the programming line. I was a member of the first group of students that MCAST took in. From MCAST I took the following courses:

- First Diploma in Computing
- National Diploma in Computing Software Development
- Higher National Diploma in Computing Software Development

Then I decided to take a Computer Science degree from the University of Hertfordshire.
Even though I already had a degree, I followed different online courses (using the Udemy and DataCamp training platforms) covering various topics, including:

- Tableau courses
- Python courses
- A machine learning course

Even though I am a hands-on manager, I believe that it is essential to never stop learning. That is why I keep myself updated by attending online conferences. Before covid, I attended the on-site Data Lead Summit and the Tableau Conference, and early on in my career I chose to pursue Microsoft qualifications in programming and Dynamics Navision certifications.

What was your career path, how did you get to where you are now?

After finishing the Higher National Diploma in Computing Software Development, I started working full time as a C# developer at the Computime software company, where I was involved in several projects. During my second year there I decided to do a Computer Science degree with the University of Hertfordshire in my free time. After four years I had the opportunity to work in Libya, installing and configuring Sun and vision tools at various companies in the region. Following this, I felt I needed to learn to work with data. This gave me the push to seek a new role in the Business Intelligence department at Computime. During my time in that department, we built a BI tool which assists banks to create specific reports that are sent to the central bank on a monthly basis, using the Microsoft stack to build these processes. Two years later I decided it was time for a change, so I started looking for a role within the telecommunications industry. This led me to securing a position with Melita as a Business Intelligence Developer, taking care of the data warehouse, building up ETL processes, building up reports and being part of the migration team. After two years, an opportunity came up in the iGaming industry. I moved to Mr Green as a Data Engineer.
In this role my responsibilities were taking care of the data warehouse and integrating new third-party data with our data, and for the latter I needed to build ETL processes to connect with APIs. From this role new opportunities started presenting themselves, among which was Hero Gaming, who were looking to build a data team. This role offered the opportunity to learn on the job, as everything had to be built from the ground up. After two years building these systems and helping the company to become data driven, I was promoted to a lead position where I had the responsibility to manage people and help the Data Analyst team by providing useful insights from data. After a while I was responsible for both the Data Analyst and Business Intelligence teams. My current responsibility is as a manager of these teams, managing different stakeholders, prioritising workloads, taking care of team members, and presenting projects.

Was it a conscious decision that you wanted to be in the iGaming industry?

Honestly, at the beginning of my career I wanted to learn a lot and was not aiming to work in a particular industry, but after having gained valuable experience in the banking and telecom industries, I wanted to try iGaming too, so that I would have knowledge and experience from the top 3 industries in Malta.

How did your qualifications help you?

Five years ago, it was hard to enter the iGaming industry, even for someone like me with a lot of experience, especially technical experience of coding, visualisation and analysis. I went to several interviews but was not accepted, not because of qualifications or knowledge of tools, but because I didn't have experience in the iGaming industry. As a business intelligence or data analyst, I do not believe you need to be coming from a particular industry to get started, as the most crucial assets that an experienced analyst needs are to be keen to learn the business, and to learn how the company operates.
This can be learnt over time and with the help of other employees. Luckily, I had the good fortune to get an interview with Mr Green and was accepted based on my qualifications, experience, and good references. Now when I look for new recruits for my team, I prioritise technical knowledge, an eagerness to learn, and that they are team players.

What word of advice do you have for anyone with ambitions in iGaming?

The first thing that a potential employee needs to do is to get familiar with the industry, by talking with people who already work in the industry and by subscribing to iGaming newsletters, to understand what is happening in the industry. Try to get experience by using different tools, take courses such as those at MCAST, follow professionals and participate in online discussions. I recommend working in the iGaming industry as it enables you to work with people from different nationalities, with different mindsets. It is important to be motivated to learn; iGaming is a very fast-paced environment where things change from one day to the next, so you have to think fast and be ready for change.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00378.warc.gz
CC-MAIN-2023-14
6,797
42
http://support.g5plus.net/forums/topic/unable-to-save-header-custom-text-content-and-theme-colours/
code
My website url: http://www.haulio.io
Description about error: I am trying to update my header background to be transparent when first loaded, but white when I scroll down. I have changed the Theme Colours but I still don't see the changes. Kindly advise.

Cannot reply in this topic. If you have any issue, please create a new topic.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00455.warc.gz
CC-MAIN-2019-51
328
4
http://stackoverflow.com/questions/8064026/android-viewflipper-to-flip-pages-on-a-listview
code
I have a ListView in Android that I want to split into pages that fit the size of the screen. This is the code for the ListView xml:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal"
    android:weightSum="1">
    <ListView
        android:id="@android:id/list"
        android:layout_width="fill_parent"
        android:textFilterEnabled="true"
        android:layout_height="match_parent" >
    </ListView>
</LinearLayout>

I know that in order to use ViewFlipper you need to have as many views (ListViews in this case) as you need inside '<ViewFlipper>...</ViewFlipper>' tags. Here's my problem: my list fills from SQL queries and you can filter it, so the list sometimes has 3 pages, sometimes has 10... So my question is: is there any way to dynamically generate another ListView to use ViewFlipper or... is there any way to modify the xml dynamically and add ListView tags depending on how many pages I need to show?
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00323-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
1,033
6
http://www.alertra.com/features/devmisc
code
Settings common to all device types.

Abbreviation: An eight character name representing this device. This name is referenced in device alerts and reports, etc. and it appears in the status window in the upper-right corner of the Alertra website.

Master Device: If the master device goes down, alerts for this device will be suppressed. The master device can be anything, but usually it's a router. If the internet connection is lost, or the router goes down, alerts will be suppressed for everything monitored behind the router.

Notification Schedule: Manage notification schedules from the Devices page. Select the schedule to use for this device.

Maintenance Schedule: Manage maintenance schedules from the Devices page. Select the schedule to use for this device.

Check Frequency: The device will be checked according to the frequency specified.

Alert Email Subjects: Customize alert email subjects for each device. A distinct subject can be configured for device down, device ok and device warning alerts.

If numeric paging will be used, give the device a unique 8 digit pager code. Numeric alerts for this device will contain the unique pager code, followed by one of these two-digit alert codes:

01 Device OK
02 Device Down
03 Device Warning

If numeric paging is not needed, just leave the default pager code of 0.

Caution: These settings can significantly affect monitoring behavior. Improper settings can result in false alarms. The default settings are appropriate for most applications.

Timeout: Time to wait for a connection or to receive data.
Retries: Number of times to retry from different locations after error.
Retry Delay: Time to wait before retry from different location after error.

Set up a page on a web server to execute some action, for example: reboot a server. Specify the URL required to initiate the action here. On Device Down, we will call the URL using HTTP GET and a timeout of 30 seconds.
The result we receive from the call will be logged and can be viewed by clicking "Callback" from the Event Log. On Device Down alerts involving network connectivity issues, a traceroute will be initiated from one of the monitoring stations reporting the error. The results of the traceroute will be included in the alert e-mail.
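The numeric paging scheme above is straightforward to reproduce: the page body is just the 8-digit device pager code followed by the two-digit alert code. As a sketch (the function name, inputs, and validation here are my own illustration, not part of any Alertra API):

```python
# Sketch of the numeric page format described above: an 8-digit per-device
# pager code followed by a two-digit alert code. Names are illustrative.
ALERT_CODES = {
    "ok": "01",       # 01 Device OK
    "down": "02",     # 02 Device Down
    "warning": "03",  # 03 Device Warning
}

def numeric_page(pager_code: str, alert: str) -> str:
    """Build the 10-digit numeric page sent for a device alert."""
    if len(pager_code) != 8 or not pager_code.isdigit():
        raise ValueError("pager code must be exactly 8 digits")
    return pager_code + ALERT_CODES[alert]
```

So a device with pager code 12345678 going down would page 1234567802.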
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701370254/warc/CC-MAIN-20130516104930-00020-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,250
27
https://www.trishtech.com/2015/07/firefox-interest-dashboard-shows-your-interests/
code
You keep browsing one website after another in your favorite Firefox web browser and spend many hours every day. But which of the websites consume most of your time? In other words, what are you actually interested in when surfing the wonderful world of the internet? A new extension called “Firefox Interest Dashboard” answers these questions for you in a graphical manner. This extension analyzes your web browsing history of the last 30 days and then finds out the things that you look for the most on the internet, the websites you visit the most and so on. It also recommends new topics and sites based on your interests. The “Firefox Interest Dashboard” extension for the Firefox browser does not require it to be restarted after the installation. So you can open the interests dashboard soon after the extension is installed, by clicking on the ID icon in the Firefox toolbar. If you do not have a history of the last 30 days, then it complains about not having enough data but shows the analysis in a graphical manner anyway. For complete and accurate analysis it requires at least 30 days of browsing history. You can see the top interest score, top site ranking, sites visited per day and time spent per day here. You can also open the interest dashboard by typing about:you in the address bar and pressing the Enter key. Scrolling down a little, it displays the top ten most visited websites based on the browsing history of your past 30 days. A rank of all of your interest categories and their intensity is also displayed. You can expand each of these categories to explore them further. Clicking on the small gear-like icon on the interest dashboard screen, you can open some of the options too. For example, you can choose to recompute the history (which would change the whole interests analysis), generate the debug report, and view the recommendations tab. On the recommendations tab, it displays all the active interests in the form of interconnected branches.
Clicking on one of the nodes of these interests will help you explore your visited or recommended sites further. This way you can actually know which types of sites you have been visiting and which ones you may want to visit. You can download the “Firefox Interest Dashboard” extension for Firefox from https://addons.mozilla.org/en-US/firefox/addon/firefox-interest-dashboard/.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100525.55/warc/CC-MAIN-20231204052342-20231204082342-00193.warc.gz
CC-MAIN-2023-50
2,357
6
https://community.plotly.com/t/setting-opacity-for-unselected-points-outside-the-marker/32407
code
When I select a point in a scatter plot I want all the other unselected points to have a low opacity but without showing the density effect (i.e. non-accumulative). In a normal plot, setting the opacity outside the marker achieves this, but unselected doesn't have an opacity attribute. Is there any other way?

Hi @sabri, welcome to the forum! I think what you want to decrease is not the opacity (which will let the other markers show through it), but the saturation of the color (as in HSV colors). You can pass plotly colors as hsv strings (eg 'hsv(0,100%,100%)') so you could just decrease the saturation of non-selected points.

@sabri See a previous discussion on this forum https://community.plotly.com/t/selected-points/12596 and a link to a notebook on setting color of selected points and opacity of the non-selected ones.

Thanks Emmanuelle, but hsv in my case is not different from rgb. This is basically what I am trying to achieve: I've done this by having two plots: a plot of the unselected points with opacity 0.1 (defined outside the marker), and another plot with selected points drawn on top to avoid overlapping. It is not efficient and a bit slower but that is what I can manage. This is what happens when I use unselected.marker.opacity=0.1: The problem with hsv or rgb is that the selected points are sometimes buried under the unselected ones and can sometimes be barely seen. I am using the plots in Dash btw.
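For reference, the saturation idea suggested above can be sketched with a small stdlib helper; the function name and default saturation are my own, only the 'hsv(h,s%,v%)' string format is what plotly accepts:

```python
import colorsys

def desaturate(rgb, saturation=0.15):
    """Return a plotly-style 'hsv(h,s%,v%)' string for an RGB triple
    (0-255 ints), with its saturation replaced by `saturation` (0..1).
    Useful for washing out non-selected markers without lowering opacity."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return "hsv({:.0f},{:.0f}%,{:.0f}%)".format(h * 360, saturation * 100, v * 100)
```

For example, pure red (255, 0, 0) becomes 'hsv(0,15%,100%)', a pale version of the same hue that stays fully opaque, so the selected points drawn in the saturated color remain visible on top.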
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00147.warc.gz
CC-MAIN-2023-06
1,425
15
https://www.thirdtier.net/2012/10/2356/
code
The final part of the series looking at ClearOS, one of the major commercial alternatives to Small Business Server. In this post I present my conclusions. Be sure to read part one for an overview of ClearOS and part two for an introduction to the installation process. Part three looks at the domain and file sharing. Part four covers messaging solutions. Part five covers backup and recovery.

Part Six: Conclusions and what else you need to know.

So, what do I think about ClearOS, and would I recommend it to someone as a file and messaging solution? I like it. I like the product. I like what it brings forth as a solution. It provides an easy to set up, centralized administration environment for a small business. It scales: you could use the server to support a network of hundreds of users if you so desired. There is a large inventory of applications that you can install from the Marketplace to make this a true all-in-one solution. These applications include web filtering, gateway anti-virus, and a firewall product as well as the file sharing and messaging that I reviewed. The web administration interface is a great way to administer the server. This is one area that Windows has been lacking in for years. Everything that I need to do on the server can be done from a central console that is accessible from a web browser. If the change I need to make cannot be done from the WebUI, SSH allows me quick access to the console of the server. Included in the centralized administration of users, groups, and computers, password policies can be defined and enforced on the clients. ClearOS is the easiest setup and implementation of OpenLDAP I have seen; it does just work. Samba file sharing likewise works across all versions of Windows. Zarafa is a compelling Exchange alternative, and its integration with the LDAP directory provides true single sign-on for users. The largest hole I see in the ClearOS solution is in its backup and recovery.
By default there is only a configuration backup from the WebUI, and no way to schedule it or send it to an external disk. This is a huge issue for small businesses that need a set-and-forget backup solution. Linux does not have a volume shadow copy service like Windows, so backing up open files can be difficult. Monitoring the server health is not intuitive, and where there are applications such as disk usage, it does not alert you if the disk is getting close to full. There are logfile monitoring tools but they only aggregate what is there. While the offering is complete, providing all that you need in a server for small business, the applications could use some further refinement. For example the ability to add a Public Folder store in Zarafa from the WebUI, or creating a way to move FlexShares to a different disk if you need to. The ultimate decision to install ClearOS is going to break down to three factors: cost, feature set, and familiarity with Linux. If cost is an absolute hard-line factor, ClearOS is cheaper than Microsoft solutions. Purchasing the bare minimum support contract makes it extremely cheap. If all you need is a centralized directory server and a messaging platform, ClearOS has the features you need, and many more. Familiarity with Linux is required to deploy ClearOS. You don't need to be an expert, but the ability to SSH into the console and navigate the file system needs to be in your skill set. While there is no supported migration path from a Windows domain, it could be done by purchasing an Active Directory connector in the Marketplace. Overall, ClearOS is a compelling option for a single-server, on-premises solution.
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.24/warc/CC-MAIN-20161202170901-00399-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
3,630
5
https://www.sqlservercentral.com/Forums/Topic545821-146-1.aspx
code
Hi, did you find a solution to your problem? I am having the exact same problem. I am a newbie to SQL Server 2008, but I did configure it to work the first time. After SP1 for SQL Server 2008, I subsequently ran into problems and had to create a new instance. I have tried all possible settings in the Database Mail configuration utility. In the parameters I increased the timeout period, but still no luck. The error message in the log file:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 2 (2009-04-30T12:25:17). Exception Message: Cannot send mails to mail server. (The operation has timed out.)

I have configured SQL Server Agent to use my database mail profile.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00506.warc.gz
CC-MAIN-2018-05
719
4
https://www.overclock.net/forum/69-nvidia/47397-winfox-won-t-read.html
code
I'm trying to use WinFox to mod my 6800LE's BIOS, and in the BIOS flashing utility you have to get it to read your current VGA BIOS (I guess to make a backup) before you can flash it with a new BIOS. Only prob is, mine won't read my BIOS. It loads the ROM file etc just fine, but it always fails to read my BIOS. I don't have the faintest idea why... I tried rebooting, reinstalling, etc and it still does it.

Have you tried using a different flash utility for copying the VGA BIOS from your gfx card, like nvflash 5.13 for example? If needed, there is my BIOS flashing guide for gfx cards if you are stuck in a certain area or confused in the method.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00493.warc.gz
CC-MAIN-2019-43
645
3
http://www.ieee802.org/secmail/msg16596.html
code
Contributor list for 802 Overview and Architecture IEEE 802 Architecture list, hosted by 802.1: or email [email protected] for help. In the next draft, I am adding a list of contributors so as to include individuals who have contributed to the draft but are not members of the 802.1 WG (who are going to be included in another list in the front matter). So, if you feel that you have contributed to the draft and would like to see your name in the document, please send me an email with the preferred spelling of your name. Please do not send this response to the email lists. Send it directly to my email. Unsubscribe link: mailto:[email protected] IEEE. Fostering technological innovation and excellence for the benefit of humanity.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510866.52/warc/CC-MAIN-20181016180631-20181016202131-00019.warc.gz
CC-MAIN-2018-43
776
13
https://llllllll.co/t/topic/51402
code
i need time to properly check this out but i wanna thank you @rajaTheResidentAlien, i just see a readme and license in the smoke_n_mirrors github repository. are there other files that still need to be added? side note: what you played at yesterday’s flashcrash was splendid! usually, when i write lol, it is more like i’m laughing out loud inside myself. when i read your reply above, i really did laugh out loud, really.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00698.warc.gz
CC-MAIN-2022-40
426
5
https://www.erlang.com/topic/3-15660/
code
I am researching setting up a telephone answering service in the UK. I was under the impression that I would need a switch, many DDIs and the software to handle different types of calls i.e. call outs, mail order, reception desk etc. together with a platform for integrating message delivery by sms, e-mail, fax etc. I was interested to read the thread related to using a ‘server’ based telephony rather than ‘switch’ based. Can someone please explain the difference between the two from both a practical and cost perspective?
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992159.64/warc/CC-MAIN-20210517084550-20210517114550-00055.warc.gz
CC-MAIN-2021-21
538
2
https://forums.t-nation.com/t/iphone-tethering-ps3/157475
code
Since your computer is picking up the signal, maybe run a cat5 from your ps3 to the computer? Or, not sure about this, you could right click the connection, select properties. Click the sharing tab, then click "Share this Connection". I don't know much on using it, but google might. Hope this helps at all.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00792.warc.gz
CC-MAIN-2017-51
307
2
https://www.fi.freelancer.com/job-search/make-a-statistics-site/
code
Looking for someone with a strong background in probabilities, statistics, and coding. The deadline for this project is 13 Nov 2018, 10:00 PM PST. The deadline cannot be changed.

Phase 1 of a multi-phase research/statistics/data analysis project, that will be the baseline for future research and data models.

Our client has mandated us to research and collate a subset of non-related statistics and data for drafting of an executive summary in support of a specific business case for their marketing team. This is a relatively simple

I'm looking for a serious person that would need to look at some kick boxing matches posted on youtube/facebook, with the objective of extracting statistics for the punches and kicks that happened for each athlete within the viewed match. In short the targeted tasks are: 1. watch the kick boxing movie on one of the following sources: facebook/youtube/company

DO NOT BID UNLESS YOU READ THE DETAILS AND YOU ARE SURE YOU CAN COMPLETE THIS TASK 100%. Read t...software but it must be readable. In order to do this, you need to have good knowledge of discrete mathematics/structures and algorithms. Before making a bid please be sure to read the details and make sure if you are capable of doing the task or not.

Summary: This project involves the design and development of a web site for use by Adrey Consultancy LLP. This project will follow the timeline outlined below and does not include ongoing maintenance of the site outside of what may be stated in the scope. Project Scope: This SOW covers the following activities and deliverables. • Theme Design and Development:

Hi all - I would like to hire a research assistant for hourly work. I cannot read Bahasa / Indonesian websites so would like someone who is fluent in both English and Bahasa to help me. Currently I need to understand population figures for each Kabupaten. Let me know if you are able to assist me in finding this dataset.

Candidates in scientific writing are required to specialize in mathematics or mathematical statistics, have experience, and be of Egyptian or Arab nationality.

I have some work to do just that they can do just that they have a good idea about statisticians. Those who think that only this person can do this project. I'll say the rest in my inbox. My budget 40$, thank you.

...which is mostly above my head. I'm looking for a paid study partner who would be willing to review lists of questions I compile and then provide estimates of what he would charge to answer my questions. This would be piecework (Word files with questions). I cannot pay an hourly fee. You **must** have a strong personal interest in Gaussian processes.

Long-term data science support in R Studio, Nvivo, Tableau, Stata, SPSS, Matlab, Weka, Excel, Python, Hadoop, AWS, Power BI required. When you bid, attach samples of your work and your CV, and list the skills you are expert in.

Hello, I need help with statistics assignments: 3 of the same assignment, 800-1200 words, different data points and subjects, and a hypothesis to test. Pretty simple stuff; I'll give you an example of one that was already created.

Hello, I need to have a Statistics page created for my client's website. For that you must have experience building graphs (bar/pie/tabular). Please share any of your work where I can see intense work on graphs.

Need help in a machine learning project which requires statistics knowledge as well.

I need help with a little project: modeling the following problem and then programming it in MiniZinc. More details will be shared via chat.

I have this quantitative case study and I'm looking to answer the questions below related to the case. Please see the attached file. Please contact me if you're familiar with quantitative statistics and able to help me understand and answer these questions.

Hey Sreeraj, my name is Mikhl Stanley, I want to be a tech entrepreneur in my country. This project is my second venture. My first project failed because I didn't have a reliable team. I'm doing things differently this time around and I want to offer you this project and a place as the lead tech administrator on my team if this project is successfully

My client has hired me to make a web application to match his already made ridesharing mobile app. This web application is for long-distance driving trips. It is related to rideshare and for booking long trips similar to a travel booking website but for cars only. My description: 1. Developing (1) Premium, Custom Web Application for ____________________

i want long term [log in to view URL] you are expert, please bid [log in to view URL] sample project is given. the budget for sample is low. if you do well, we will work more projects

...years and automated one of my most simple strategies to be a robot (EA). From 500 USD initially, it generated a profit of 2,678 USD (total account is 3,178 USD) after 6 months. My question regards statistics: how can we be 95% sure this trading robot is systematically profitable and not a series of lucky trades? Is this time base measuring (after

: Records pertaining to the monthly number of job-related injuries at an underground coal mine were being studied by a federal agency. The values for the past 100 months were as follows: Apply the chi-square test to these data to test the hypothesis that the underlying distribution is Poisson with mean 3.0. Again, let α = 0.05.

I need a statistic in Excel which shows the following attributes: Username + Follower Number (sorted by follower number, from high to low). This is for a total of 8 Instagram accounts: A Account 24400 Followers, B Account 30500 Followers, C Account 171000 Followers, D Account 89100 Followers, E Account 26300 Followers, F Account 18100 Followers, G+H Account

I'm looking for help in a research task that requires a good understanding of statistical analysis. I was asked to select two quantitative papers with different research designs and statistical analysis and then write a summary for each article with a special focus on the following question: 1) What type of quantitative design did the authors use?

Hello, I would like to start writing a journal paper. We would like to start from scratch to finalise the work. Please don't waste my time if you have no idea or background about statistics. Hope to hear from you.

If possible, I would like the following list of statistics. I know that many of these are publicly available BUT I do not have the time to do the discovery. The remainder, I would not know where to find: 1. Value of clothing and fashion industries in the US? 2. Net profit of top 3 brands in clothing and top 3 in fashion overall in the US? 3. Amount

I need help with statistics and Excel for my business. You have to be experienced with using Excel to solve linear regression, statistics, and blackbox influence diagrams. You will also need to be experienced with data tables, VLOOKUP, Solver, and probability. I need this completed in 1 day. I will pay extra for this.

Long-term data science support in R Studio, Nvivo, Tableau, SPSS, Matlab, Weka, Excel, Python, Hadoop, AWS required. Amateurs are welcome to bid on this. Training will be provided.
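The chi-square/Poisson posting above describes a standard goodness-of-fit setup. As a sketch of the expected-frequency side of that calculation (the actual monthly injury counts are not reproduced here, and the binning choice is my own assumption), the Poisson(3.0) expected counts for 100 months can be computed with the stdlib:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Expected counts for 100 months under H0: injuries ~ Poisson(3.0),
# binning observed counts 0..7 and lumping ">= 8" into a final tail bin.
lam, n_months = 3.0, 100
expected = [n_months * poisson_pmf(k, lam) for k in range(8)]
expected.append(n_months - sum(expected))  # the ">= 8" tail bin

# With observed bin counts in hand, the test statistic would be
# sum((obs - exp)**2 / exp) over the bins, compared against the
# chi-square critical value at alpha = 0.05.
```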
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741578.24/warc/CC-MAIN-20181114020650-20181114042650-00423.warc.gz
CC-MAIN-2018-47
7,267
26
http://artsmonkey.blogspot.com/2006/09/hypothetically-speaking.html
code
if one were to go to a random social gathering, and discover that one had slept with 3/4 of the heterosexual men attending.... would that make one a slut? even if there were only 4 heterosexual men in attendance? i don't want an answer.... i mean... one doesn't. i'm a little bit of everything. i'm a gemini - true to form. i'm a driven goal oriented person with too many interests to stay completely focussed - this makes me antsy much of the time. i'm calm when i am doing.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590046.11/warc/CC-MAIN-20180718021906-20180718041906-00602.warc.gz
CC-MAIN-2018-30
475
2
http://morethandogchildren.blogspot.com/2012/08/idump.html
code
This is a good representation of where I stand in Declan's world. It has been sooo nice here lately..especially in the morning. I've gotten some good reading time in! Gabriel is an honorary family member and Decs loves him. Here they were dancing: ...and here being super silly: I worked all day Saturday and Mark went to the Omaha Zoo w/ his parents and Decs: Up to trouble...
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00177.warc.gz
CC-MAIN-2018-30
377
7
https://geant4-forum.web.cern.ch/t/cannot-get-dielectric-dielectric-interface-to-reflect-optical-photons/1673
code
I’m running a Geant4 10.5 patch 01 simulation to test the effect of different coatings on scintillators. For the initial tests, I am using a general particle source (GPS) isotropically emitting 1 MeV electrons at the center of a 2.54 cm x 2.54 cm cylindrical NE213 scintillator. I am counting the number of optical photons that intersect one of the flat faces of the scintillator (where the PMT would be coupled). My code is a modified version of OpNovice from the Geant4 examples, which I have modified to allow particle-dependent scintillation yields and which counts scintillation and optical photons and records their properties at the PMT face. The simple geometry consists of the scintillator, completely surrounded by a cylindrical aluminum container with 2 mm thick walls, except for the one flat side where optical photons are counted. That side is made of glass. I am varying optical properties of the aluminum can to study the impact on the number of optical photons detected by the PMT. When I use a dielectric/metal interface with either Glisur or Unified model, things behave roughly as expected. If I set the container reflectivity to 0%, I get a minimum number of photons detected (173102), which roughly agrees with the number expected from geometry considerations (175083; the average of two simple geometry approximations 161250,188916). As reflectivity increases to 100%, the number of photons detected increases proportionally, as expected. Ground or polished finishes give the same result, and unified/backpainted apparently has 0% reflection. However, the dielectric/dielectric interface with either Glisur or Unified model yields slightly less than 0% reflected photons no matter what parameters I choose. I have tried all the combinations that I used for dielectric/metal plus I have tried varying the fractions of specular lobe to specular spike to backscatter. I have tried including the refractive index of the container. 
I used both a real refractive index (real part of that for Al and also that for air) and out of desperation a complex refractive index for aluminum (although I recognize that only real index is supposed to be used for dielectric/dielectric.) I have read all the G4 documentation including the Boundary Process Section in Physics Processes Book for Application Developers and all related papers that I can find. I have even gone through the G4OpBoundaryProcess.cc and .hh source code. But the dielectric/dielectric interface just does not reflect any optical photons. Am I missing something obvious? I have attached a screenshot of a spreadsheet of some results. Calculated and expected photon numbers are highlighted in orange for each run.
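For an independent sanity check: at a dielectric/dielectric boundary, Geant4 derives reflection and refraction from Snell's law and the Fresnel equations using the real RINDEX values on both sides (and, as a hedged hint, if the material across the boundary has no RINDEX property defined, photons are typically killed with a NoRINDEX status rather than reflected). This plain-Python sketch, not Geant4 code, shows the unpolarized reflectance such a model implies; the indices are assumptions (n of roughly 1.505 for NE213, n = 1.0 for air):

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized Fresnel reflectance for light going from medium n1 into n2
    at incidence angle theta_i (radians). Real indices only, as in the
    dielectric/dielectric model."""
    s = n1 / n2 * math.sin(theta_i)
    if s >= 1.0:
        return 1.0  # total internal reflection beyond the critical angle
    theta_t = math.asin(s)
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return 0.5 * (rs + rp)

# Scintillator (assumed n = 1.505) into air (n = 1.0):
print(fresnel_reflectance(1.505, 1.0, 0.0))               # ~0.041 at normal incidence
print(fresnel_reflectance(1.505, 1.0, math.radians(45)))  # 1.0: past the ~41.6 deg critical angle
```

So a working dielectric/dielectric boundary should still reflect a substantial fraction of isotropic scintillation light via total internal reflection; seeing essentially zero reflection regardless of parameters points at the boundary being skipped or photons being absorbed, which is worth checking against the RINDEX definitions of both volumes.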
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00713.warc.gz
CC-MAIN-2022-27
2,692
7
https://www.medicinalplantsarchive.us/plant-growth-2/protein-interaction-networks.html
code
Inferring signalling networks solely from transcriptomics data has several limitations. For example, the discrete nature of the data could limit the complexity of the networks that can be inferred. Moreover, transcriptomics data can provide only a limited picture of the actual physiological changes underlying a living organism. This has been clearly shown very recently, through proteomic analysis of Arabidopsis suffering biotic stress. Jones et al. (2006) have analyzed the alterations in the proteome of Arabidopsis leaves during responses to challenge by Pseudomonas syringae pv tomato DC3000 using two-dimensional gel electrophoresis. The abundance of each protein identified was compared with that of selected transcripts obtained from comparable GeneChip experiments (Truman et al. 2006). Changes were reported in total soluble protein, chloroplast-enriched, and mitochondria-enriched over four time points (1.5-6 h after inoculation). In total, 73 differential spots representing 52 unique proteins were successfully identified. Significantly, many of the changes in protein spot density occurred before transcriptional reprogramming. The high proportion of proteins represented by more than one spot indicated that many of the changes to the proteome can be attributed to post-transcriptional modifications. One further strength of this proteomic analysis was the ability to separate components of basal defence (by inclusion of the hrpA mutant; de Torres et al. 2003) from disease and resistance responses, DC3000, and DC3000 (avrRpml) inoculations. In recent years, large-scale protein-protein interaction data have become available for some model organisms, and such data have proven extremely useful for inferring gene regulatory networks. The effective integration of data from different sources appears to be one of the most important approaches for unravelling the cell dynamics. Unfortunately, protein-protein interaction data are still very limited for Arabidopsis. 
A promising approach for expanding a given dataset of protein-protein interaction is that of the "in silico" prediction of interactions from a set of genomic features using machine learning techniques. For example, Bayesian Networks (Jensen 1997) have been used to predict genome-wide protein-protein interactions in yeast by integrating information from different genomic features, ranging from co-expression relationships to similar phylogenetic profiles (Jansen et al. 2003; Lu et al. 2005). These results were particularly important because it was possible to show that at a certain level of sensitivity the predictions were more accurate than the existing high-throughput experimental dataset. On the other hand, when experimental data for a given organism are available, it is often necessary to combine experimental results in order to create an interaction network. In fact, when different techniques are used to identify protein interactions, the process of creating a unique protein-protein interaction network involves combining the results of separate experiments. Moreover, the problem can be complicated by the fact that the data may not to be directly comparable and is likely to have different amounts of noise. A technique that has been successfully applied to solve this problem involves using a machine learning algorithm to learn the parameters of a model that combines the different experimental results. In general, using a small set of well-known protein-protein interactions (a.k.a. gold standard), the system is trained to output a probability of a protein-protein interaction given the different experimental data. Recently, this method has been used for integrating the results of two (possibly repeated) purifications (matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI) and liquid chromatography tandem mass spectrometry (LCMS)) of 4,562 different tagged proteins of the yeast S. cerevisiae (Krogan et al. 2006). 
Using the hand-curated protein complexes in the MIPS (Munich Information Center for Protein Sequences) reference database (Mewes et al. 2006), a machine learning system was trained to assign a probability that each pairwise interaction is true based on experimental reproducibility and mass spectrometry scores from the relevant purifications. In this way, from the two "incomplete" graphs obtained using the LC-MS and MALDI techniques it was possible to generate a single combined protein-protein interaction network for S. cerevisiae. Notice that the edges of this network are labelled with a number that is the probability of interaction between the two proteins they connect. In other words, the network is an undirected weighted graph in which individual proteins are nodes and the weight of the edge connecting two nodes is the probability that the interaction is correct. Interaction data are noisy, and therefore the protein-protein interaction networks obtained from them will contain many errors in the form of links which can be either missing or incorrect (von Mering et al. 2002). A very interesting question is whether it is possible to use the network topology to reduce the amount of noise in the experimental data, that is, to "correct" some of the experimental errors. A positive answer to this question for PPI networks has been given recently by Paccanaro et al. (Paccanaro et al. 2005; Yu et al. 2006). The basic idea of the method derives from the way in which large-scale PPI experiments are carried out and particularly from the matrix model interpretation of their results (Bader and Hogue 2002). In these experiments, one protein (the bait) is used to pull out the set of proteins interacting with it (the preys) in the form of a list. When such lists differ only in a few elements, it is reasonable to assume that this is because of experimental errors, and the missing elements should therefore be added.
Each list can be represented as a fully connected graph in which proteins occupy the nodes. Then the problem of identifying lists that differ in only a few elements is equivalent to finding a clique (a completely connected subgraph) in a graph with a few missing edges, which was named a "defective clique". Therefore the algorithm searches the network for defective cliques (i.e., nearly complete complexes of pairwise interacting proteins) and predicts the interactions that complete them. This method was shown to have a very good predictive performance, thus allowing the correction of many errors present in large-scale experiments. Once a network has been obtained, it can be used as a model to answer important biological questions. For example, it is well known that proteins carry out their function by interacting with other proteins and that they tend to act in complexes. Identifying these complexes is therefore a crucial step in understanding the cell dynamics and can give important clues to protein function. One way to identify such complexes is by identifying tight clusters in PPI networks. This approach has been recently used in (Krogan et al. 2006) to identify protein complexes in S. cerevisiae. Particularly, the Markov cluster algorithm (van Dongen 2000) (which simulates random walks within graphs) was used to identify highly connected modules within the global protein-protein interaction network. The algorithm identified 547 protein complexes, about half of which were previously unknown. Finally, we would point to a recent work which builds a slightly different type of network that has been used for function prediction. Some biological problems or data do not have a natural representation as networks. However, sometimes they can be remapped onto a network formalism and this representation can offer an efficient solution. An interesting case is represented by the problem of clustering protein sequences.
Clustering protein sequences based on their evolutionary relationship is important for sequence annotation as structural and functional relationships can be potentially inferred. This problem can be easily mapped into that of clustering the nodes of a weighted undirected graph in which each node corresponds to a protein sequence and the weights on the edges correspond to a measure of distance between two sequences. The goal is to partition such a graph into a set of discrete clusters whose members are homologs. Recently, a method has been introduced for solving this problem that is based on spectral graph theory. This method partitions the graph into clusters by considering the random walk formulation on the graph, and analyzing the perturbations to the stationary distribution of a Markov relaxation process. This is done by looking at the eigenvectors of the Markov transition matrix. A detailed explanation of the technique is beyond the scope of this review, and we refer the interested reader to the work of Paccanaro et al. (2003, 2006). When this algorithm was tested on difficult sets of proteins whose relationships were known from the SCOP database (Structural Classification of Proteins, http://scop.mrc-lmb.cam.ac.uk/scop/) the method correctly identified many of the family/superfamily relationships. Results obtained using this approach were much better than those obtained using other methods on the same datasets. On average, when quantifying the quality of the clusters using a measure that combines sensitivity and specificity, this approach showed improvements of 84% over hierarchical clustering (Everitt 1993), 34% over Connected Component Analysis (CCA) (similar to GeneRAGE; Enright and Ouzounis 2000) and 72% over another global method, TribeMCL (Enright et al. 2002).
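The Markov cluster (MCL) algorithm cited above alternates expansion (matrix powers, i.e. longer random walks) with inflation (elementwise powers that sharpen strong flows) until the walk matrix stabilizes. A minimal NumPy sketch on a toy two-module graph follows; it is an illustration of the idea, not van Dongen's reference implementation, and the graph and parameter values are made up:

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Minimal Markov Cluster sketch: random-walk expansion plus inflation."""
    m = adj.astype(float) + np.eye(len(adj))      # self-loops stabilize the walk
    m /= m.sum(axis=0)                            # column-stochastic transition matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)  # expansion: longer random walks
        m = m ** inflation                        # inflation: strengthen strong flows
        m /= m.sum(axis=0)                        # re-normalize columns
    # Rows with non-zero entries index cluster "attractors"; their supports are clusters.
    return {tuple(int(i) for i in np.nonzero(row > 1e-6)[0])
            for row in m if row.max() > 1e-6}

# Toy "interaction network": two triangles joined by a single bridging edge (2-3).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl(A))
```

On this graph the inflation step suppresses flow across the weak bridge, so the supports that survive correspond to the two triangles, which is exactly the "highly connected module" behavior described in the text.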
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573561.45/warc/CC-MAIN-20190919163337-20190919185337-00476.warc.gz
CC-MAIN-2019-39
9,668
15
https://coderanch.com/t/202415/java/performance-testing-optimizeit
code
i want to use borland optimizeit tool for testing a j2ee application. my application requires lot of resources. i want to know, for analysing that application, how much resources are required by optimizeit. if anyone worked on that can you suggest me. [email protected] Ideally, if you have 1gb ram you should be able to start up your app-server through OptimizeIt. Give me more details, like which app server you are using and application size, and maybe i can give you some other tips on the same. You think you know me .... You will never know me ... You know only what I let you know ... You are just a puppet ... --CMG
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510867.6/warc/CC-MAIN-20181016201314-20181016222814-00532.warc.gz
CC-MAIN-2018-43
627
3
https://community.cloudflare.com/t/dkim-selector2-is-not-working-resolving/607319
code
I am trying to implement DMARC/DKIM. Email is by M365, sent out via Barracuda. I have created the DKIM records as directed by MS. However, when doing a lookup on selector2 it is not found. Both selector1 and selector2 have been created identically in Cloudflare, but 1 resolves and 2 does not. Can anyone suggest a resolution? In Cloudflare you’ve created CNAMEs? I think I’ve seen it elsewhere before where Microsoft only generates one of the target records at their end. I recall rotating the DKIM keys in the Microsoft 365 settings may fix it so you could try that. If not, what is the domain name? Thank you. I had created the CNAMEs in Cloudflare. The issue was as you mentioned, and once I rotated the DKIM keys in the MS tenancy all was resolved. This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.
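For reference, the Microsoft 365 DKIM records are plain CNAMEs pointing into the tenant's onmicrosoft.com zone; in zone-file form they look roughly like this (the domain and tenant names below are hypothetical placeholders, not the poster's values):

```text
; Hypothetical example.com entries for Microsoft 365 DKIM
selector1._domainkey  IN  CNAME  selector1-example-com._domainkey.contoso.onmicrosoft.com.
selector2._domainkey  IN  CNAME  selector2-example-com._domainkey.contoso.onmicrosoft.com.
```

Until Microsoft publishes the matching selector2 target record on its side, which rotating the keys in the tenant forces, the selector2 lookup fails even though the CNAME configured in Cloudflare is correct.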
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474688.78/warc/CC-MAIN-20240227220707-20240228010707-00274.warc.gz
CC-MAIN-2024-10
849
8
https://www.davesite.com/computers/windows7/windows_is_slow_make_windows_faster/checklist_make_windows_7_faster.shtml
code
This checklist is designed to be a quick overview of steps you can do relatively quickly to improve the speed of running Windows. It's not a guide to speeding up the boot up or restart times. It's also not a guide to removing viruses, malware, ransomware, etc. If your Windows 7 PC is running slow but otherwise healthy, this checklist is a few things you can do to try to speed it up with relatively low risk of messing up your computer. That said, BACKUP your computer before doing anything in this checklist. Microsoft sets an end date for when they'll stop making security updates to a version of Windows; according to Microsoft's official website, you can get updates for Windows 7 until January 14, 2020. After that, you'll be on your own for security. Even so, you should still be using security software in addition to getting updates from Microsoft. Microsoft encourages you to buy a new laptop or PC to get Windows 10 instead of buying a copy of the software to install yourself. If you're building your own PC (I've got a checklist for the pc parts for that) you'll still have to buy a copy of Windows 10. So the simple answer is this: until January 14, 2020, you won't have to upgrade. One of the first things to do is a virus scan of your whole system. This checklist for making Windows faster is geared towards "healthy" computers that have already been checked for viruses. There are plenty of guides to picking security software so I'm not going to make a new one here for you. Some is free and some has a yearly license fee. After you've checked for viruses, start going through the Making Windows 7 Faster Checklist! The original Make Windows Faster guide was written for Windows 95 and Windows 98. It's archived here.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00629.warc.gz
CC-MAIN-2018-17
1,745
8
https://gamecritics.com/dale-weir/video-fifteen-of-2011s-games-get-the-lego-treatment/
code
Created for the opening of the 15th Annual Interactive Achievement Awards, this montage depicted 15 of the year's top games. The project was created by Alex Kobbs and features a musical score called "Roll the Dice", composed by Glen Ballard. It must have been a nice little treat for attendees. If you were not fortunate enough to attend, you can watch the entire award show below: - Extra Credits: Differences in Scale vs Differences in Kind - May 15, 2013 - Extra Credits: Why Console Specs Don’t Matter - May 3, 2013 - Extra Credits: Intrinsic vs Extrinsic - April 27, 2013
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00760.warc.gz
CC-MAIN-2023-06
584
5
https://github.com/kubernetes/kubernetes/issues/78904
code
Upgrade tests do not check pod instances and restart counts are identical for workload objects #78904 An upgrade of only the API server should not result in pod recreations or container restarts. The workload upgrade tests do not currently check if pod instances and container restart counts are identical. What you expected to happen: At the end of the upgrade test setup step, record the pod instances for the workload object in question, their uids, container restart counts, and associated node names and versions. In the verification step post-upgrade, if the associated nodes still exist at the same versions, verify the same pod instances still exist with the same container restart counts. For bonus points, an unrelated update (e.g. adding an annotation to the workload object) should be performed, to verify that no defaults are added in the update path that would result in a spurious rollout of new pods.
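The verification the issue asks for can be sketched as a snapshot diff keyed by pod UID. This is a hedged illustration, not the actual e2e test code; the `before`, `after`, and `nodes_after` dicts are hypothetical stand-ins for what kubectl or client-go would return:

```python
def verify_pods_unchanged(before, after, nodes_after):
    """Check the invariant from the issue: after a control-plane-only upgrade,
    the same pod instances (UIDs) must still exist with identical container
    restart counts, provided their nodes still run the same version.

    before/after map pod UID -> {"restarts": int, "node": str, "node_version": str};
    nodes_after maps node name -> kubelet version after the upgrade."""
    failures = []
    for uid, pre in before.items():
        # Only pods whose node still exists at the same version are expected to survive.
        if nodes_after.get(pre["node"]) != pre["node_version"]:
            continue
        post = after.get(uid)
        if post is None:
            failures.append(f"pod {uid} was recreated")
        elif post["restarts"] != pre["restarts"]:
            failures.append(f"pod {uid} containers restarted ({pre['restarts']} -> {post['restarts']})")
    return failures

# Hypothetical snapshots taken around an API-server-only upgrade.
nodes = {"node-a": "v1.14.3"}
before = {"uid-1": {"restarts": 0, "node": "node-a", "node_version": "v1.14.3"},
          "uid-2": {"restarts": 1, "node": "node-a", "node_version": "v1.14.3"}}
after = {"uid-1": {"restarts": 0, "node": "node-a", "node_version": "v1.14.3"},
         "uid-2": {"restarts": 2, "node": "node-a", "node_version": "v1.14.3"}}
print(verify_pods_unchanged(before, after, nodes))  # flags uid-2's extra restart
```

The "bonus points" step from the issue would slot in between the two snapshots: apply an unrelated annotation update, then run the same check to catch spurious rollouts caused by defaulting.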
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000545.97/warc/CC-MAIN-20190626194744-20190626220744-00112.warc.gz
CC-MAIN-2019-26
1,074
9
https://redinvestigadores.org/display/p-rie-riecdt-47
code
We study spillovers between REITs and stock markets in a global context. We compute both directional and net spillover indexes in a global and dynamic setting. Our findings indicate that connectedness between these markets has increased substantially over time. On average, stock markets are net transmitters and REITs markets are net receivers. Considerable time variation is observed. Spillovers are higher during crises, and REITs were net spillover transmitters to stock markets during the Subprime Financial Crisis. Our results have important implications for global investors.
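Directional and net spillover indexes of this kind are conventionally computed from a forecast-error variance decomposition table in the Diebold-Yilmaz tradition. A minimal sketch with a made-up two-market table (not the paper's data or its exact estimator):

```python
import numpy as np

def spillover_indexes(theta):
    """Directional and net spillovers from a forecast-error variance
    decomposition matrix theta, where theta[i, j] is the share of market i's
    variance attributable to shocks in market j (Diebold-Yilmaz style)."""
    theta = theta / theta.sum(axis=1, keepdims=True)  # normalize each row to 1
    off = theta - np.diag(np.diag(theta))             # keep cross-market shares only
    from_others = off.sum(axis=1) * 100               # spillovers received by each market
    to_others = off.sum(axis=0) * 100                 # spillovers transmitted by each market
    net = to_others - from_others                     # positive: net transmitter
    total = off.sum() / theta.shape[0] * 100          # total spillover index
    return from_others, to_others, net, total

# Illustrative table: market 0 = "stocks", market 1 = "REITs" (numbers invented).
theta = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
frm, to, net, total = spillover_indexes(theta)
print(net)    # [ 20. -20.]: stocks net transmitter, REITs net receiver
print(total)  # 20.0
```

In a dynamic setting such as the paper's, the same computation is repeated on rolling-window decompositions to trace how connectedness evolves through crises.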
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00421.warc.gz
CC-MAIN-2024-18
578
1
http://www.reddit.com/r/leagueoflegends/comments/1306jo/digital_zilean/
code
I've had an Idea a year back. Make Zilean look a bit younger, somewhere in his 30s, with a shorter beard. His robes should be black and cyan. Now the big twist. Change the clock thing on his back into a Digital Time display. At 00:00 it could be somehow connected to the game timer. His basic attack would throw cyan energy like the original skin shows. The bombs could be on a timer instead of a lint. So, I've seen your suggestions and I'm happy that you like the idea. I would like for someone to try and draw what I'm about to write. The beard can be the same length, but do make it younger; the hair on his head should be shorter. The robes, like I said, black and cyan but without sleeves. Most of you, if not all, see this skin with the digital watch on his back. But let me run this idea through you. Place the watch on his chest, the buttons be metal (chrome), but inside the timepiece there would be a hole going straight through him. The hole would be filled with chronoenergy (blue-cyan light); the hole would emit the digits on Zilean's back in a holo projection. While recalling Zil could play with the buttons on his chest.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776426171.91/warc/CC-MAIN-20140707234026-00038-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
1,118
8
https://deepai.org/publication/landmark-detection-in-low-resolution-faces-with-semi-supervised-learning
code
accuracy or achieve detection, segmentation and pose estimation results up to subpixel accuracy. These are only a few of the many tasks which have seen significant performance improvements in the last five years. However, CNN-based methods assume access to good quality images. ImageNet, COCO, CASIA, 300W or MPII datasets all consist of high resolution images. As a result of domain shift, much lower performance is observed when networks trained on these datasets are applied to images which have suffered degradation due to intrinsic or extrinsic factors. In this work, we address landmark localization in low resolution images. Although we use face images in our case, the proposed method is also applicable to other tasks, such as human pose estimation. Throughout this paper we use HR and LR to denote high and low resolutions respectively. Facial landmark localization, also known as keypoint or fiducial detection, refers to the task of detecting specific points such as eye corners and nose tip on a face image. The detected keypoints are used to align images to canonical coordinates, which are then used as inputs to different convolution networks. It has been experimentally shown in [bansal2017dosanddonts] that accurate face alignment leads to improved performance in face verification. Though great strides have been made in this direction, mainly addressing large-pose face alignment, landmark localization for low resolution images still remains an understudied problem, mostly because of the absence of large scale labeled dataset(s). To the best of our knowledge, for the first time, landmark localization directly on low resolution images is addressed in this work. Main motivation: In Figure 1, we examine possible scenarios which are currently practiced when low resolution images are encountered. Figure 1 shows the predicted landmarks when the input image is an LR image of size less than pixels.
Typically, landmark detection networks are trained with crops of HR images taken from AFLW and 300W datasets. During inference, irrespective of resolution, an incoming image is rescaled to . We deploy two methods: MTCNN and Bulat et al., which have detection and localization built in a single system. In Figure 1(a) and (b) we see that these networks failed to detect face in the given image. Figure 1(c), shows the outputs when a network trained on high resolution images is applied to a rescaled low resolution one. It is important to note that the trained network, say HR-LD high resolution landmark detector (detailed in Section 4.4) achieves state of the art performance on AFLW and 300W test sets. A possible solution is to train a network on sub-sampled images as a substitute for low resolution images. Figure 1(d) shows the output of one such network. It is evident from these experiments that networks trained with HR images or subsampled images are not effective for real life LR images. It can also be concluded that subsampled images are unable to capture the distribution of real LR images. Super-resolution is widely used to resolve LR images to reveal more details. Significant developments have been made in this field and methods based on encoder-decoder architectures and GANs have been proposed. We employ two recent deep learning based methods, SRGAN and ESRGAN to resolve given LR images. It is worth noting that the training data for these networks also include face images. Figure 1(e) shows the result when the super-resolved image is passed through HR-LD. It can be hypothesized that possibly, the super-resolved images do not lie in the same space of images using which HR-LD was trained. Super resolution networks are trained using synthetic low resolution images obtained by downsampling the image after applying Gaussian smoothing. In some cases, training data for super-resolution networks consists of paired low and high resolution images. 
Neither of the mentioned scenarios is applicable in real life situations. Main Idea: Different from these approaches, the proposed method is based on the concept of ‘generate to adapt’. This work aims to show that landmark localization in LR images can not only be achieved, but it also improves the performance over the current practice. To this end, we first train a deep network which generates LR images from HR images and tries to model the distribution of real LR images in pixel space. Since, there is no publicly available dataset, containing low resolution images along with landmark annotations, we take a semi-supervised approach for landmark detection. We train an adversarial landmark localization network on the generated LR images and hence, switching the roles of generated and real LR images. Heatmaps predicted for unlabelled LR images are also included in the inputs of the discriminators. The adversarial training procedure is designed in a way that in order to fool the discriminators, the heatmap generator has to learn the structure of the face even in low resolution. We perform extensive set of experiments explaining all the design choices. In addition, we also propose new state of the art landmark detector for HR images. 2 Related Work Being one of the most important pre-processing steps in face analysis tasks, facial landmark detection has been a topic of immense interest among computer vision researchers. We briefly discuss some of the methods which use Convolution Neural Networks (CNN). Different algorithms have been proposed in the recent past such as direct regression approaches of MTCNN by Zhang et al. and KEPLER by Kumar et al. . The convolution neural networks in MTCNN and KEPLER act as non-linear regressors and learn to directly predict the landmarks. Both works are designed to predict other attributes along with keypoints such as 2D pose, visibility of keypoints, gender and many others. Hyperface by Ranjan et al. 
has shown that learning tasks in one single network does, in fact, improve the performance of individual tasks. Recently, architectures based on the Encoder-Decoder design have become popular and have been used intensively in tasks which require per-pixel labeling such as semantic segmentation [25, 28] and keypoint detection [16, 1, 41, 15]. Despite significant progress in this field, predicting landmarks on low resolution faces still remains a relatively unexplored topic. All of the works mentioned above are trained on high quality images and their performance degrades on LR images. One of the most closely related works is Super-FAN by Bulat et al., which makes an attempt to predict landmarks on LR images by super-resolution. However, as shown in experiments in Section 4.3, face recognition performance degrades even on super-resolved images. This necessitates that super-resolution, face-alignment and face recognition be learned in a single model, trained end to end, making it not only slow in inference but also limited by the GPU memory constraints. The proposed work differs in many respects, as it needs labeled data only in HR and learns to predict landmarks in LR images in an unsupervised way. Due to adversarial training, the network not only acts as a facial parts detector but also learns the inherent structure of the facial parts. The proposed method makes the pre-processing task faster and independent of face verification training. During inference, only the heatmap generator network is used, which is based on the fully convolutional architecture of U-Net and works at the spatial resolution of making the alignment process real time. 3 Proposed Method The proposed work predicts landmarks directly on a low resolution image of spatial size less than pixels. We show that predicting landmarks directly in low resolution is more effective than current practices of rescaling or super-resolution.
The entire pipeline can be divided into two stages: (a) generation of LR images in an unpaired manner, and (b) generating heatmaps for real LR images in a semi-supervised fashion. The diagrammatic overview of the proposed approach is shown in Figure 2. Being a semi-supervised method, it is important to first describe the datasets chosen for the ablative study. High Resolution Dataset: We construct the HR dataset by combining the training images from AFLW and the entire 300W dataset. We divide the Widerface dataset, which consists of images in different resolutions captured under diverse conditions, into two groups based on their spatial size. The first group consists of images with spatial size between and , whereas the second group consists of images with more than pixels. We combine the second group in the HR training set, resulting in a total of HR faces. The remaining images from AFLW are used as validation images for the ablative study and test set for the landmark localization task. Although generation of LR images is an unpaired task, we use AFLW and 300W images for training, as the generated LR images from these datasets are used for semi-supervised learning in the second step. Low Resolution Dataset: The first group from the Widerface dataset consists of faces and is used as real or target low resolution images. 3.1 High to Low Generator and Discriminator The high to low generator, shown in Figure 3, is designed following the Encoder-Decoder architecture, where both encoder and decoder consist of multiple residual blocks. The input to the first convolution layer is the HR image concatenated with the noise vector which has been projected using a fully connected layer and reshaped to match the input size. Similar architectures have also been used in [6, 17]. The encoder in the generator consists of eight residual blocks, each followed by a convolution layer to increase dimensionality.
Max-pooling is used to decrease the spatial resolution to , for a high resolution image of pixels. The decoder is composed of six residual units followed by convolution layers to reduce the dimensionality. Finally, one convolution layer is added in order to output a three channel image. BatchNorm is used after every convolution layer. The discriminator , shown in Figure 3, is also constructed in a similar way, except that max-pooling is used only in the last three layers, considering that the inputs to the discriminator are low resolution images. Referring to Figure 2, we use for input high resolution images of size , for generated LR images of size and for real LR images of the same size. We train the high to low generator using a weighted combination of GAN loss and pixel loss. The loss is used to encourage convergence in the initial training iterations. The final loss can be summarized in Equation 1, where and are hyperparameters which are empirically set following . Following recent developments in GANs, we experimented with different loss functions. However, we use hinge loss and Spectral Normalization in combination, due to faster training. The hinge loss for the generative networks can be defined as in Equation 2: where is the distribution of real LR images from the Widerface dataset, and is the distribution of generated images . The weights of the discriminator are normalized in order to satisfy the Lipschitz constraint , shown in Equation 3: Finally, the pixel loss described in Equation 4 is used, which minimizes the distance between the generated and subsampled images. This loss ensures that the content is not lost during the generation process. Here the operation is implemented as a sub-sampling operation obtained by passing through four average pooling layers. Figure 4 shows some sample LR images generated from the network .
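As a concrete illustration, the training objective above (a hinge adversarial loss plus a pixel loss against an average-pooled sub-sampling of the HR input) can be sketched as follows. This is a hedged NumPy sketch, not the paper's implementation; the function names, the single-channel images, and the default of four pooling steps are illustrative assumptions.

```python
import numpy as np

def hinge_d_loss(d_real, d_fake):
    # Discriminator hinge loss: push scores on real LR images above +1
    # and scores on generated LR images below -1.
    return (np.mean(np.maximum(0.0, 1.0 - d_real)) +
            np.mean(np.maximum(0.0, 1.0 + d_fake)))

def hinge_g_loss(d_fake):
    # Generator hinge loss: raise the discriminator's score on fakes.
    return -np.mean(d_fake)

def pixel_loss(generated_lr, hr, pool_times=4):
    # L2 distance between the generated LR image and the HR image
    # sub-sampled by repeated 2x2 average pooling (the operation in Eq. 4).
    x = hr
    for _ in range(pool_times):
        h, w = x.shape[0] // 2, x.shape[1] // 2
        x = x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
    return np.mean((generated_lr - x) ** 2)
```

The full generator objective would then be a weighted sum of `hinge_g_loss` and `pixel_loss`, with the weights playing the role of the hyperparameters in Equation 1.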
3.2 Semi-Supervised Landmark Localization 3.2.1 Heatmap Generator The keypoint heatmap generator, shown in Figure 5, produces heatmaps corresponding to N (in our case or ) keypoints in a given image. As mentioned earlier, the objective of this paper is to show that landmark prediction directly on LR images is feasible even in the absence of labeled LR data, and to evaluate the performance on auxiliary tasks compared to the commonly used practices of rescaling or super-resolution. Keeping this in mind, we choose a simple network based on the U-Net architecture as the heatmap generator, instead of computationally intensive stacks of hourglass networks or CPMs. The network consists of 16 residual blocks, where both the encoder and the decoder have eight residual blocks. The eight residual blocks in the encoder are divided into four groups of two blocks each, and the spatial resolution is halved after each block using max pooling. The heatmap generator outputs (N+1) feature maps corresponding to N keypoints and 1 background channel. After experimentation, this design for landmark detection has proven to be very effective and has resulted in state-of-the-art results for landmark prediction when trained with HR images (see Section 4.3). 3.2.2 Heatmap Discriminator The heatmap discriminator follows the same architecture as the heatmap generator. However, the input to the discriminator is a set of heatmaps concatenated with their respective color images. This discriminator predicts another set of heatmaps and learns whether the keypoints described by the input heatmaps are correct and correspond to the face in the input image. The quality of the output heatmaps is determined by their similarity to the input heatmaps, following the notion of an autoencoder. The loss is computed as the error between the input and the reconstructed heatmaps.
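For illustration, groundtruth heatmaps of the kind used throughout this pipeline (one Gaussian per keypoint plus a background channel, matching the generator's (N+1) outputs) could be constructed roughly as below. The sigma value and the background-channel definition are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def keypoint_heatmaps(keypoints, size, sigma=1.0):
    """keypoints: list of (x, y) pixel coordinates.
    Returns an array of shape (N+1, size, size): N keypoint channels
    plus one background channel."""
    ys, xs = np.mgrid[0:size, 0:size]
    maps = []
    for (kx, ky) in keypoints:
        # 2-D Gaussian centered at the keypoint location.
        maps.append(np.exp(-((xs - kx) ** 2 + (ys - ky) ** 2)
                           / (2.0 * sigma ** 2)))
    fg = np.stack(maps)
    # One simple choice of background channel: 1 minus the strongest
    # foreground response at each pixel.
    bg = np.clip(1.0 - fg.max(axis=0), 0.0, 1.0)
    return np.concatenate([fg, bg[None]], axis=0)
```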
3.2.3 Heatmap Confidence Discriminator The architecture of the heatmap confidence discriminator is identical to the one used in the high to low discriminator, except that the input is an LR image concatenated with the heatmap. This discriminator receives three inputs, corresponding to the generated LR image with groundtruth heatmap, the generated LR image with predicted heatmap, and a real LR image with predicted heatmap. This discriminator learns to distinguish between the groundtruth and predicted heatmaps. In order to fool this discriminator, the generator should generate heatmaps which are as real or feasible (for an unlabeled real LR image) as possible. The loss propagated from this discriminator encourages the generator to predict accurate heatmaps not only for images whose groundtruth is available but also for images without annotations. This in turn enables the generator to understand the structure of the face in the given image and make accurate predictions. Switching roles of generated and real images: During training of this part of the system, the roles of generated and real low resolution images are switched. While training the high to low discriminator , the generated LR images are considered to be fake, so that the generator tries to generate as realistic an LR image as possible. It is worth recalling that HR images have annotations associated with them. We assume that keypoint locations in a generated LR image stay relatively the same as in its downsampled version. Therefore, while training , the downsampled annotations are considered to be groundtruth for the generated LR images, and the networks are trained to predict heatmaps as close to the groundtruth as possible in order to fool the discriminators and . tries to predict accurate keypoints for real LR images by learning from generated LR images, and hence the switching of roles. 3.3 Semi-supervised Learning The learning process of this setup is inspired by the seminal work of Berthelot et al. in and LeCun et al.
in , called Energy-based GANs. The discriminator receives two sets of inputs: generated LR images with downsampled groundtruth heatmaps, and generated LR images with predicted heatmaps. When the input consists of groundtruth heatmaps, the discriminator is trained to recognize it and reconstruct a similar one, i.e., to minimize the error between the groundtruth heatmaps and the reconstructed ones. On the other hand, if the input consists of generated heatmaps, the discriminator is trained to reconstruct different heatmaps, i.e., to drive the error between the generated heatmaps and the reconstructed heatmaps as large as possible. The losses are expressed as where represents the heatmap of a given image, constructed by placing a Gaussian with centered at the keypoint location . Inspired by Berthelot et al. in , we use a variable to control the balance between the heatmap generator and discriminator. The variable is updated every iterations. The adaptive term is defined by: where is bounded between and , and is a hyperparameter. As in Equation 7, controls the emphasis on . When the generator is able to fool the discriminator, becomes smaller than . As a result, increases, making the term dominant. The amount of acceleration to train on is adjusted proportionally to , i.e., the distance by which the discriminator falls behind the generator. Similarly, when the discriminator gets better than the generator, decreases to slow down the training on , making the generator and the discriminator train together. The discriminator is trained using the loss function from Least Squares GAN, as shown in Equation 9. This loss function was chosen in order to be consistent with the losses computed by , which are also losses. It is worth mentioning that in this case represents the groundtruth-heatmap distribution on generated LR images, while represents the distribution of generated heatmaps on generated LR images and real LR images.
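The adaptive balancing scheme described above, in the spirit of BEGAN, can be sketched as a simple update rule. The names and default values here are illustrative assumptions, not the paper's actual hyperparameters.

```python
def update_balance(k_t, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """One BEGAN-style update of the balance variable.

    loss_real: discriminator reconstruction error on groundtruth heatmaps.
    loss_fake: reconstruction error on generated heatmaps.
    gamma:     target diversity ratio (a hyperparameter).
    lambda_k:  learning rate of the balance variable (a hyperparameter).
    """
    # When the generator fools the discriminator (loss_fake grows),
    # the balance term shrinks and k decreases, slowing the emphasis
    # on the adversarial term; the opposite happens when the
    # discriminator falls behind.
    balance = gamma * loss_real - loss_fake
    k_next = k_t + lambda_k * balance
    return min(max(k_next, 0.0), 1.0)  # k stays bounded in [0, 1]
```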
The generator is trained using a weighted combination of the losses from the discriminators and and the heatmap loss. The loss functions for the generator are described in the following equations: where and are hyperparameters set empirically, obeying . We put more emphasis on to encourage convergence of the model in the initial iterations. Some real LR images with keypoints predicted by are shown in Figure 6. 4 Experiments and Results 4.1 Ablation Experiments We experimentally demonstrated in Section 1 (Figure 1) that networks trained on HR images perform poorly on LR images. Therefore, we propose the semi-supervised learning approach described in Section 3. With the above-mentioned networks and loss functions, it is important to understand the implication of each component. This section examines each of the design choices quantitatively. To this end, we first train the high to low resolution networks and generate LR versions of the AFLW test images. In the absence of real LR images with annotated landmarks, this is done to create a substitute for a low resolution dataset with annotations, on which localization performance can be evaluated. We also generate subsampled versions of the AFLW trainset and AFLW testset using average pooling after applying Gaussian smoothing. Data augmentation techniques such as random scaling , random rotation () and random translation of up to pixels are used. Evaluation Metric: Following most previous works, we obtain the error for each test sample by averaging the normalized errors over all annotated landmarks. For AFLW, the obtained error is normalized by the ground truth bounding box size over all visible points, whereas for 300W, the error is normalized by the inter-pupil distance. Wherever applicable, NRMSE stands for Normalized Root Mean Square Error. All the networks are trained in Pytorch using the Adam optimizer, with an initial learning rate of and values of .
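The per-sample evaluation metric described above can be sketched as follows. This is a hedged NumPy sketch with illustrative names; the caller supplies the normalizer (groundtruth bounding box size for AFLW, inter-pupil distance for 300W).

```python
import numpy as np

def normalized_error(pred, gt, visible, normalizer):
    """Mean Euclidean distance over visible landmarks, divided by a
    dataset-specific normalizer.

    pred, gt:  (N, 2) arrays of predicted / groundtruth (x, y) points.
    visible:   (N,) boolean mask of annotated-visible landmarks.
    """
    dists = np.linalg.norm(pred[visible] - gt[visible], axis=1)
    return dists.mean() / normalizer
```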
We train the networks with a batch size of for epochs, while dropping the learning rates by after and epochs. Setting S1: Train networks on subsampled images? We train only the network , with the subsampled AFLW training images, using the loss function in Equation 10, and evaluate the performance on generated LR AFLW test images. Setting S2: Train networks on generated LR images? In this experiment, we train the network using generated LR images, in a supervised way, using the loss function from Equation 10. We again evaluate the performance on generated LR AFLW test images. Observation: From the results summarized in Table 0(b), it is evident that there is a significant reduction in localization error when is trained on generated LR images, validating our hypothesis that subsampled images, on which many super-resolution networks are trained, may not be correct representatives of real LR images. Hence, we need to train the networks on real LR images. Setting S3: Does adversarial training help? This question is asked in order to understand the importance of training the heatmap generator in an adversarial way. In this experiment, we train and using the losses in Equations 5, 6, 10 and 11. Metrics are calculated on the generated LR AFLW test images and compared against the experimental setting mentioned in S2 above. Setting S4: Does trained in an adversarial manner scale to real LR images? In this experiment, we wish to examine whether training the networks and jointly improves the performance on real LR images from the Widerface dataset (see Section 3 for datasets). Observation: From Table 0(b) we observe that the network trained with setting S3 performs marginally better compared to setting S4. However, since there are no keypoint annotations available for the Widerface dataset, conclusions cannot be drawn from this drop in performance.
Hence, in the following subsection 4.3, we leap towards understanding this phenomenon indirectly, by aligning the faces using the models from setting S3 and setting S4 and evaluating face recognition performance. 4.2 Experiments on Low Resolution images We choose to perform a direct comparison on a real LR dataset against two recent state-of-the-art methods: Style Aggregated Networks and HRNet. To create a real LR landmark detection dataset, which we call Annotated LR Faces (ALRF), we randomly selected identities from the TinyFace dataset, out of which one LR image (less than pixels and more than pixels) per identity was randomly selected, resulting in a total of LR images. Next, three individuals were asked to manually annotate all the images with 5 landmarks (two eye centers, nose tip and mouth corners) in MTCNN style, where invisible points were annotated with . The mean of the points obtained from the three annotators was taken to be the groundtruth. As per convention, we used the Normalized Root Mean Square Error (NRMSE), averaged over all visible points and normalized by the face size, as the comparison metric. Table 0(a) shows the results of this experiment. We also report the time for a forward pass of one image on a single GTX 1080. Without loss of generality, the results can be extrapolated to other existing works, as and are currently state of the art. MTCNN, which performs detection and alignment in a single system, was able to detect only faces out of the test images. 4.3 Face Recognition experiments In the previous section, we performed ablative studies on the generated LR AFLW images. Although convenient for quantifying the performance, this does not uncover the importance of training the three networks jointly in a semi-supervised way. Therefore, in this section, we choose to evaluate the models from setting S3 and setting S4 (Section 4.1) by comparing the statistics obtained by applying the two models to align face images for the face recognition task.
We use the recently published and publicly available Tinyface dataset for our experimental evaluation. It is one of the very few datasets aimed towards understanding LR face recognition and consists of labeled facial identities with an average of three face images per identity, giving a total of LR face images (average pixels). All the LR faces in TinyFace are collected from the web (PIPA and MegaFace2) across diverse imaging scenarios, captured under uncontrolled viewing conditions in pose, illumination, occlusion and background. The known identities are divided into two splits: for training and the remaining for testing. Evaluation Protocol: In order to compare model performances, we adopt the closed-set face identification (1:N matching) protocol. Specifically, the task is to match a given probe face against a gallery set of enrolled face images, with the true match from the gallery at top-1 of the ranking list. For each test class, half of the face images are randomly assigned to the probe set, and the remaining to the gallery set. For the purpose of this paper, we drop the distractor set, as this does not divulge new information while significantly slowing down the evaluation process. For face recognition evaluation, we report Top-k (k = 1, 5, 10, 20) statistics and mean average precision (mAP). Experiments with network trained from scratch: Since the number of images in the TinyFace dataset is much smaller compared to larger datasets such as CASIA or MsCeleb-1M, we observed that training a very deep model like Inception-ResNet quickly leads to over-fitting. Therefore, we adopt a CNN with fewer parameters, specifically LightCNN. Since the inputs to the network are images of size , we disable the first two max-pooling layers. After detecting the landmarks, training and testing images are aligned to the canonical coordinates using an affine transformation.
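The affine alignment step mentioned above can be sketched as a least-squares fit mapping detected landmarks onto a fixed canonical template. This NumPy sketch is illustrative only: the canonical 5-point template would be an assumption of the caller, and warping the image with the estimated transform is omitted.

```python
import numpy as np

def affine_from_landmarks(src, dst):
    """Least-squares 2x3 affine A such that [x, y, 1] @ A.T ~= dst.

    src: (N, 2) detected landmarks; dst: (N, 2) canonical template points.
    Needs at least 3 non-collinear points (5 landmarks in practice).
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    return A.T                                   # 2x3 affine matrix
```

The returned matrix could then be passed to any image-warping routine to place the eyes, nose and mouth corners at the canonical positions before feature extraction.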
We train layer LightCNN models using the training split of the TinyFace dataset under the following settings: Setting L1: Train networks on generated LR images? In this setting, we use the model trained under setting S2 from Section 4.1, where the network is trained using generated LR images in a supervised way with the loss function from Equation 10. Setting L2: Does adversarial training help? We use the model trained from setting S3 (Section 4.1) to align the faces in the training and testing sets. In this setting, the networks and are trained using a weighted combination of pixel loss and GAN losses from Equations 5, 6, 10 and 11. Setting L3: Does trained in an adversarial manner scale to real LR images? In this setting, the networks , and are trained jointly in a semi-supervised way. We use Tinyface training images as real low resolution images. Later, Tinyface training and testing images are aligned using the trained model for training the LightCNN model. Setting L4: End-to-end training? Under this setting, we also train the high to low networks and , using the training images from the Tinyface dataset as real LR images. We reduce the amount of data augmentation in this case to resemble Tinyface dataset images. With the obtained trained model, landmarks are extracted and images are aligned for LightCNN training. Setting L5: End-to-end training with pre-trained weights? This setting is similar to setting L4 above, except that instead of training a LightCNN model from scratch, we initialize the weights from a pre-trained model, trained on the CASIA-Webface dataset. Observation: Table 1(a) summarizes the results of the experiments done under the settings discussed above. Although we observed a drop in localization performance when training the three networks jointly (Table 0(b)), there is a significant gap in rank-1 performance between settings L2 and L3.
This indicates that with semi-supervised learning, generalizes well to real LR data, and hence also validates our hypothesis of training , and together. Unsurprisingly, an insignificant difference is seen between settings L3 and L4. Experiments with a pre-trained network: Next, to further understand the implications of joint semi-supervised learning, we design another set of experiments. In these experiments, we use a pre-trained Inception-ResNet model, trained on MsCeleb-1M using ArcFace and Focal Loss. This model expects an input of size pixels, hence the images are resized after alignment in low resolution. Using this pre-trained network, we perform the following experiments: Baseline: For the baseline experiment, we choose to follow the usual practice of re-scaling the images to a fixed size irrespective of resolution. We trained our own HR landmark detector (HR-LD) on AFLW images for this purpose. Tinyface gallery and probe images are resized to and used by the landmark detector as inputs. Using the predicted landmarks, images are aligned to canonical coordinates similar to ArcFace. Setting I1: Does adversarial training help? The model trained for S3 (Section 4.1) is used to align the images directly in low resolution. Features for gallery and probe images are extracted after rescaling the images, and cosine distance is used to measure the similarity and retrieve the images from the gallery. Setting I2: Does trained in an adversarial manner scale to real LR images? For this experiment, the model trained for L3 in Section 4.3 is used for landmark detection in LR. To recall, in this setting, the three models , and (with and frozen) are trained jointly in a semi-supervised way, and Tinyface training images are used as real LR data for . Setting I3: End-to-end training? In this case, we align the images using the model from setting L4 in Section 4.3.
In this case, we also trained the high to low networks ( and ) using training images from the Tinyface dataset as real LR images. After training the model for 200 epochs, the weights are frozen to train and in a semi-supervised way. Observation: Unsurprisingly, we observe from Table 1(b) that training the heatmap prediction networks in a semi-supervised manner, and aligning the images directly in low resolution, improves the performance of any face recognition system trained with HR images. 4.4 Additional Experiments Setting A1: Does super-resolution help? The aim of this experiment is to understand whether super-resolution can be used to enhance the image quality before landmark detection. We use SRGAN to super-resolve the images before using the face alignment method from Bulat et al. to align the images. Setting A2: Does super-resolution help? In this case, we use ESRGAN to super-resolve the images before using HR-LD (below) to align them. Observation: It can be observed from Table 3 that the face recognition performance obtained after aligning super-resolved images is not on par even with the baseline. It can be hypothesized that super-resolved images may not represent the HR images on which or HR-LD are trained. High Resolution Landmark Detector (HR-LD): For this experiment, we train on high resolution images of size (for AFLW and 300W) using the loss from Equation 10. We evaluate the performance of this network on the common benchmarks of the AFLW-Full test and 300W test sets, shown in Table 4. We note that LAB and SAN either use extra data, extra annotations, or a larger spatial resolution to train their deep networks. A few sample outputs are shown in Figure 8. 5 Conclusion In this paper, we first present an analysis of landmark detection methods when applied to LR images, and the implications for face recognition. We also describe the proposed method for predicting landmarks directly on LR images.
We show that the proposed method improves face recognition performance over the commonly used practices of rescaling and super-resolution. As a by-product, we also developed a simple but state-of-the-art landmark detection network. Although low resolution is chosen here as the source of degradation, the method can trivially be extended to capture other degradations in the imaging process, such as motion blur or atmospheric turbulence. In addition, the proposed method can be applied to detect human keypoints in LR in order to improve skeletal action recognition. In the era of deep learning, LR landmark detection and face recognition is a fairly untouched topic; however, we believe this work will open new avenues in this direction. This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. - A recurrent autoencoder-decoder for sequential face alignment. http://arxiv.org/abs/1608.05477. Accessed: 2016-08-16. - M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d human pose estimation: New benchmark and state of the art. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014. - D. Berthelot, T. Schumm, and L. Metz. BEGAN: boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017. - A. Bulat and G. Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In International Conference on Computer Vision, volume 1, page 8, 2017. - A.
Bulat and G. Tzimiropoulos. Super-fan: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with gans. CoRR, abs/1712.02765, 2017. - A. Bulat, J. Yang, and G. Tzimiropoulos. To learn image super-resolution, use a gan to learn how to do image degradation first. European Conference on Computer Vision, 2018. - X. P. Burgos-Artizzu, P. Perona, and P. Dollar. Robust face landmark estimation under occlusion. ICCV, 0:1513–1520, 2013. - Z. Cheng, X. Zhu, and S. Gong. Low-resolution face recognition. CoRR, abs/1811.08965, 2018. - J. Deng, J. Guo, and S. Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. CoRR, abs/1801.07698, 2018. - X. Dong, Y. Yan, W. Ouyang, and Y. Yang. Style aggregated network for facial landmark detection. In CVPR, pages 379–388, 2018. - I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014. - Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. CoRR, abs/1607.08221, 2016. - M. Koestinger, P. Wohlhart, P. M. Roth, and H. Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011. - A. Kumar, A. Alavi, and R. Chellappa. Kepler: Keypoint and pose estimation of unconstrained faces by learning efficient h-cnn regressors. In 2017 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017), pages 258–265, May 2017. - A. Kumar and R. Chellappa. A convolution tree with deconvolution branches: Exploiting geometric relationships for single shot keypoint detection. 
CoRR, abs/1704.01880, 2017. - A. Kumar and R. Chellappa. Disentangling 3d pose in A dendritic CNN for unconstrained 2d face alignment. CoRR, abs/1802.06713, 2018. - C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016. - T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. CoRR, abs/1708.02002, 2017. - T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, Computer Vision – ECCV 2014, pages 740–755, Cham, 2014. Springer International Publishing. - J. Lv, X. Shao, J. Xing, C. Cheng, and X. Zhou. A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. - X. Mao, Q. Li, H. Xie, R. Y. K. Lau, and Z. Wang. Multi-class generative adversarial networks with the L2 loss function. CoRR, abs/1611.04076, 2016. - T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018. - A. Nech and I. Kemelmacher-Shlizerman. Level playing field for million scale face recognition. CoRR, abs/1705.00393, 2017. - A. Newell, K. Yang, and J. Deng. Stacked Hourglass Networks for Human Pose Estimation, pages 483–499. Springer International Publishing, Cham, 2016. - H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. arXiv preprint arXiv:1505.04366, 2015. - R. Ranjan, V. M. Patel, and R. Chellappa. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. CoRR, abs/1603.01249, 2016. - S. Ren, X. Cao, Y. Wei, and J. Sun. 
Face alignment at 3000 FPS via regressing local binary features. In CVPR, pages 1685–1692, 2014. - O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. - O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. - C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In 2013 IEEE International Conference on Computer Vision Workshops, pages 397–403, Dec 2013. - K. Sun, B. Xiao, D. Liu, and J. Wang. Deep high-resolution representation learning for human pose estimation. In CVPR, 2019. - C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016. - G. Trigeorgis, P. Snape, M. A. Nicolaou, E. Antonakos, and S. Zafeiriou. Mnemonic descent method: A recurrent process applied for end-to-end face alignment. In CVPR, Las Vegas, USA, June 2016. - X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, C. C. Loy, Y. Qiao, and X. Tang. ESRGAN: enhanced super-resolution generative adversarial networks. CoRR, abs/1809.00219, 2018. - S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. CoRR, abs/1602.00134, 2016. - W. Wu, C. Qian, S. Yang, Q. Wang, Y. Cai, and Q. Zhou. Look at boundary: A boundary-aware face alignment algorithm. In CVPR, 2018. - X. Wu, R. He, and Z. Sun. A lightened CNN for deep face representation. CoRR, abs/1511.02683, 2015. - Xuehan-Xiong and F. De la Torre. Supervised descent method and its application to face alignment. In CVPR, 2013. - S. Yang, P. Luo, C. C. Loy, and X. Tang. Wider face: A face detection benchmark. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. - D. Yi, Z. Lei, S. Liao, and S. Z. Li. 
Learning face representation from scratch. CoRR, abs/1411.7923, 2014. - J. Zhang, S. Shan, M. Kan, and X. Chen. Coarse-to-fine auto-encoder networks (cfan) for real-time face alignment. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, ECCV, volume 8690 of Lecture Notes in Computer Science, pages 1–16. Springer International Publishing, 2014. - K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, Oct 2016. - N. Zhang, M. Paluri, Y. Taigman, R. Fergus, and L. D. Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. CoRR, abs/1501.05703, 2015. - Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, pages 94–108, 2014. - J. J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. CoRR, abs/1609.03126, 2016. - S. Zhu, C. Li, C. Change Loy, and X. Tang. Face alignment by coarse-to-fine shape searching. June 2015.
Choosing a Locale Number formats are dependent on the locale; that is, the country/language/culture group of the local operating system. The number formats most English-speaking Americans are accustomed to use a period as a decimal point, a comma to separate every three orders of magnitude, a dollar sign for currency, and numbers in base 10 that read from left to right. In this locale, Bill Gates's personal fortune, in Microsoft stock alone as of January 12, 1998, is represented as $74,741,086,650. However, in Egypt this number would be written as: The primary difference here is that Egyptians use a different set of glyphs for the digits through 9. For example, in Egypt zero is a and the glyph means 6. There are other differences in how Arabic and English treat numbers, and these vary from country to country. In most of the rest of North Africa, this number would be $74,741,086,650, as it is in the U.S. These are just two different scripts; there are several dozen more to go! Java encapsulates many of the common differences between language/script/culture/country combinations in a loosely defined group called a locale. There's really no better word for it. You can't just rely ...
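The locale dependence described here is exposed in Java through `java.text.NumberFormat`; the following minimal sketch formats the figure from the text in two locales (the class name and choice of locales are illustrative).

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        long fortune = 74741086650L; // the figure from the text
        // Grouping and decimal separators come from the locale, not the number.
        System.out.println(NumberFormat.getNumberInstance(Locale.US).format(fortune));
        System.out.println(NumberFormat.getNumberInstance(Locale.GERMANY).format(fortune));
        // Currency formatting also supplies the locale's currency symbol.
        System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(fortune));
    }
}
```

The US locale groups digits with commas, while the German locale groups them with periods, illustrating why formatted numbers cannot be parsed or produced without knowing the locale.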
Machine Learning for Medical Imaging Editorial | Open Access Geng-Shen Fu, Yuri Levin-Schwartz, Qiu-Hua Lin, Da Zhang, "Machine Learning for Medical Imaging", Journal of Healthcare Engineering, vol. 2019, Article ID 9874591, 2 pages, 2019. https://doi.org/10.1155/2019/9874591 Machine Learning for Medical Imaging Machine learning contains a set of methods which allow a machine to learn meaningful patterns from data directly, with minimal human interaction. The strength of a machine-learning technique is, in part, dependent on human knowledge. Such knowledge can help a machine to learn more efficiently through techniques like appropriate feature selection, transfer learning, and multitask learning. Through this symbiosis, machine learning has been successfully applied in many applications and achieves state-of-the-art performance [1–4]. More recently, machine-learning techniques have been applied to the field of medical imaging [5, 6]. With fast-improving computational power and the availability of enormous amounts of data, deep learning has become the default machine-learning technique, since it can learn much more sophisticated patterns than conventional machine-learning techniques. Unlike conventional machine-learning techniques, deep learning methods greatly simplify the feature engineering process, and some have even been applied to raw data directly. This is especially important for the field of medical imaging analysis, since it can take years of training to obtain adequate domain expertise for appropriate feature determination. Hence, this allows more researchers to explore new ideas more easily and quickly. Among all deep learning methods, convolutional neural networks (CNNs) are of special interest. By exploiting local connectivity patterns efficiently with shared weights, CNNs, such as those utilized in the ImageNet competition, have quickly become a state-of-the-art method for image processing.
Naturally, there are many recent works trying to apply CNNs to medical image analysis [9, 10]. With methods like the rectified linear unit and deep residual learning alleviating issues such as the vanishing gradient problem, deeper models can be trained more efficiently, pushing deep learning to another level. However, there are still many remaining challenges, e.g., inconsistencies in data formats and a lack of reliable training data, which need to be addressed. An active research topic is how to optimize the transfer of human knowledge to a machine-learning model. This special issue focuses on applying machine-learning techniques to medical imaging data and covers topics from traditional machine-learning techniques, e.g., principal component analysis and support vector machines, to more recent ones, such as CNNs. Transfer learning, which is used to address the issue of lacking sufficient medical image data for training, is also discussed. With the successful application of these techniques, papers in this special issue show progress on many fronts, such as the diagnosis of Alzheimer’s disease and liver tumor segmentation. We hope that the readers will find these topics interesting. Conflicts of Interest The editors declare that there are no conflicts of interest regarding the publication of this special issue. - C. M. Bishop, Pattern Recognition and Machine Learning, Springer, Berlin, Germany, 2006. - E. Brynjolfsson and T. Mitchell, “What can machine learning do? Workforce implications,” Science, vol. 358, no. 6370, pp. 1530–1534, 2017. - G. Hinton, L. Deng, D. Yu et al., “Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012. - M. Ibnkahla, “Applications of neural networks to digital communications—a survey,” Signal Processing, vol. 80, no. 7, pp. 1185–1215, 2000. - X. Chen, Z. J. Wang, and M.
McKeown, “Joint blind source separation for neurophysiological data analysis: multiset and multimodal methods,” IEEE Signal Processing Magazine, vol. 33, no. 3, pp. 86–107, 2016. - V. D. Calhoun, J. Liu, and T. Adalı, “A review of group ICA for FMRI data and ICA for joint inference of imaging, genetic, and ERP data,” NeuroImage, vol. 45, no. 1, pp. S163–S172, 2008. - Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 28, pp. 436–444, 2015. - A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017. - H.-C. Shin, H. R. Roth, M. Gao et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, 2016. - S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain tumor segmentation using convolutional neural networks in MRI images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016. - X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, vol. 15, no. 11–13, pp. 315–323, Fort Lauderdale, FL, USA, April 2011. - K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, Las Vegas, NV, USA, June 2016. Copyright © 2019 Geng-Shen Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00003.warc.gz
CC-MAIN-2020-45
5,825
23
https://www.aporem.net/posts/2006/10/licensing-woes/
code
This post is the second in the series on developing against third-party software. First, assume that your software development project is at a point where you have to decide to use third-party software for the first time. (Since each foreign license imposes some restrictions on your project, adding extra third-party software has to be dealt with separately.) If you’re lucky, you can choose between several products that solve your programming problem. Aside from evaluating which product suits your needs best, you should also take their licenses into account. And, believe it or not, there is no such thing as license-free software! (I won’t comment on the “ethical” pros and cons of any licensing scheme and will just point out which implications the licenses have on your project.) The license scheme that imposes the least restrictions on the programmer is probably everything BSD-derived. You can do anything with the software as long as you honor (= mention) the originator in your own work. At the other end is the GPL scheme, which has a “viral” quality feared by commercial developers. This means that each modified version of such a product has to be licensed under the GPL again. But there is also a less strict version called the LGPL, which is used for most GPL libraries in order to allow developers to link against them without imposing any licensing restrictions. The scope of commercial licenses is usually negotiable, with the only limitation being what you can afford to pay. The non-negotiable core is usually no re-licensing without royalties to the original author and no disclosure of the inner workings of the software. Now to the hard question (to everyone without a degree from a law school): which licenses can be mixed, and how? The answer is: it depends. And ask your lawyer! But to give you some ideas, here are some obvious cases. Using products under BSD licenses or anything less restrictive usually doesn’t change anything.
You have to be very careful about giving away the source code of your own work if you have commercially licensed software in your project. The best way is to seal your code off from the third-party code. This also has the benefit of making you independent of that source. Later you can switch (or even better: you can allow your customers to switch) to another vendor. You don’t have to think much about using software under the GPL if it won’t leave your production environment. So, enough dry theory, back to fun, back to coding!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710980.82/warc/CC-MAIN-20221204204504-20221204234504-00602.warc.gz
CC-MAIN-2022-49
2,484
1
https://www.cyberoid.in/laravel-training-in-kottayam.html
code
It makes effective use of HTML and builds efficient websites and applications. Cyberoid brings you a comprehensive and interactive Laravel training course that will help you understand the fundamentals of the Laravel framework. You can also attend free demo sessions before enrolling for the course at our institute. Our endeavor is to teach students how the parts of Laravel work with one another. The first CMS school in the state was established in Kottayam in 1840. Kottayam is also a gateway to other pilgrimage destinations like Sabarimala, Mannanam, Vaikom, Ettumanoor, Bharananganam, Erumeli and Manarkud. Ruins of palaces and forts can still be seen here. The district was also the centre of a state-led agitation for a responsible government in Travancore.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00468.warc.gz
CC-MAIN-2020-40
807
2
https://community.bitwarden.com/t/woke-up-this-morning-and-none-of-my-otp-are-working/11685
code
So I woke up this morning, and none of my OTPs are working. I copied the “master key” from Bitwarden to Authy, and now I can access my accounts. Very scary. Anyone know what’s going on here? What do you mean by “copied master key from Bitwarden to Authy”? Your “master key” is your password, unless you mean something else. And what did you put your “master key” into for Authy? Authy doesn’t deal with passwords except for the backup. And what are you using for OTP? Since Authy is an OTP provider, it’s not clear if you’re talking about Authy or Bitwarden. @aaronstuder You’re talking about your TOTP authenticator key? Those seeds are time-based, so if the time on your device is off it could cause an invalid code to be generated. Not sure what system you’re running, but yeah, like @tgreer said, those are time-based; check that your system time is accurate.
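Since the thread turns on codes being time-based, a concrete sketch may help: a TOTP code (RFC 6238) is an HMAC of the secret seed and the current 30-second time window, so a device clock that drifts into a different window yields a different code. A minimal Python illustration using only the standard library (the secret below is the RFC 6238 test key, not anyone's real seed):

```python
import base64
import hashlib
import hmac
import struct


def totp(secret_b32: str, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP computed over the current time window."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = unix_time // step                       # which 30-second window we are in
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


# RFC 6238 test key: base32 encoding of the ASCII string "12345678901234567890".
RFC_TEST_KEY = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

A clock that is a minute off computes totp(seed, t + 60) instead of totp(seed, t), landing in a different counter window and producing a different 6-digit code, which is exactly the "all my OTPs stopped working" symptom.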
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103619185.32/warc/CC-MAIN-20220628233925-20220629023925-00631.warc.gz
CC-MAIN-2022-27
873
6
https://features.cpanel.net/topic/nameserver-only-accounts
code
Nameserver Only Accounts
It would be excellent if cPanel provided a control panel centered around the DNS-Only and/or BIND portion of the system. I know that it would be great if my dedicated and VPS server customers (non-webhosting types) could manage their nameservers via the already existing system I have set up (cPanel server + DNS-Only machines). My vision would be as follows. I create the account with the following:
- domain name
- nameserver selection (list of those with a trusted relationship, or by lookup)
When it creates the account, it will create the zones for that domain and IP. It would then be sweet to have a zone for the control panel at the server's IP address to allow them to easily get to their domain control panel... something like dcp.domain.com. A separate zone template would be ideal.
Original thread: http://forums.cpanel.net/f145/nameserver-only-accounts-165646.html
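To make the account-creation step concrete, here is a minimal BIND-style zone of the kind such a feature might generate, with the extra "dcp" record pointing at the server so the customer can reach a control panel. All names and the 203.0.113.10 address are placeholders, not from the original request:

```zone
$TTL 14400
@       IN  SOA ns1.example.net. hostmaster.example.net. (
            2024010101 ; serial
            3600       ; refresh
            7200       ; retry
            1209600    ; expire
            86400 )    ; minimum
@       IN  NS  ns1.example.net.
@       IN  NS  ns2.example.net.
@       IN  A   203.0.113.10
dcp     IN  A   203.0.113.10   ; "dcp.domain.com" control-panel record at the server's IP
```

A zone template would let the panel stamp out one of these per account, substituting the domain, nameservers, and IP.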
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704843561.95/warc/CC-MAIN-20210128102756-20210128132756-00006.warc.gz
CC-MAIN-2021-04
903
8
https://www.mikethefanboy.com/tag/aziz-ansari-signing-autographs/
code
It’s Parks and Rec week here at MTF! Two stars of the hit NBC series that I’m really sad to see go off the air, have written new books and signed editions are now available! I may have mentioned it earlier, but it’s Parks and Rec week here at MTF and James from the UK was actually here in the States! He headed down to the set as they were filming in Chicago and got to see all the action first hand.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103943339.53/warc/CC-MAIN-20220701155803-20220701185803-00503.warc.gz
CC-MAIN-2022-27
464
3
http://www.traditionalmusic.co.uk/kindergarten-songs/index-of-songs%20-%200186.htm
code
Index to Kindergarten Songs
Oh, pretty bird of colored light. See Gaynor. Light bird. SCI
Oh, pretty white clouds, now what have you done? See Reed. Cloudy day. TGS
*Oh rest in the Lord. Mendelssohn. TLB
Oh, ring, glad bells. Herron. WS
Oh, ring, ring, ring, ring, merry bells. See Hitte. Merry Christmas bells. DM
Oh, Sally Waters. See Sally Waters. JB
Oh, say, busy bee, whither now are you going? See Busy (Bees. MSG)
Oh, say, can you see? See Key. Star-spangled banner. EFS—FS—GS—MSG
Oh, say have you heard of the sing-away bird? See Millard. Singaway bird. StN
Oh, say Mister Cube, what now are you hiding? See Cube song no. 1. EL
Oh, see my pigeon-house, so high! See Kohl. Pigeon-house. SM
Oh, see the carpenter. See Froebel. Carpenter. MP (Hubbard. Oh, see the carpenter. MSG)
Oh, see the gate! it opens wide. See Froebel. Farmyard. SM
Oh, see the light. Hubbard. MSG (Froebel. Little window. MP) (Wiggin. Window. KC)
Oh, see the little window bright. See Froebel. Little window. (Wiggin. Window. KC) (Hubbard. Oh! see the light. MSG)
*Oh, see the snow, the falling snow. Hailmann. HR (Hubbard. See the snow is falling fast. MSG) (Walker. Snow. WS)
Oh, see the snow is falling now. See Hubbard. See the snow is falling fast. MSG (Hailmann. Oh! see the snow. HR) (Walker. Snow. WS)
Oh, see the window I have here. See Beethoven. Little window. HR
Oh, see the shining skating pond. See Koehler. Skating. HR
Oh, shall I sing you a song that tells you how? See Sowing song. KK
Oh Shenandoah, I long to hear you. See Shenandoah. NEB2
*Oh sing with the cheery voices. Smith. SL2
Oh, sun-beams that dance on the summer sea. See Rust. Summer shower. EL
Oh, swan of slenderness. See Little red lark. EFS—FS
Oh take, thou lovely child of spring. See Grieg. First primrose.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540521378.25/warc/CC-MAIN-20191209173528-20191209201528-00463.warc.gz
CC-MAIN-2019-51
1,805
23
https://www.gamingonlinux.com/2023/01/an-interview-with-the-creator-of-the-heroic-games-launcher/
code
Interested to learn a little about the people who make cool open source programs? Today I have an interview with Flávio, the creator of the popular Heroic Games Launcher used on Linux desktop, Steam Deck, macOS and Windows. Q: First of all, can you introduce yourself? "Hi! My name is Flávio, I am a developer originally from Brazil but moved to Sweden 4 years ago. Father of 2 kids and passionate about tech, open source, gaming, history and heavy metal." Q: How did you get started with programming? "Well, I started working in IT in general when I was 16 and at that time I started to experiment with web development. Professionally I started to work as a developer around 7 or 8 years ago. Web and Android development." Q: Tell us about the Heroic Games Launcher, what is it and why did you make it? "Heroic is an open source games launcher; it supports downloading, installing and playing games from the Epic and GOG stores. It is available for Linux, Windows and macOS. Behind the curtains it uses the great tool called Legendary to deal with the Epic games and our in-house solution called GOGDL to deal with the GOG games. Besides that, it is also possible to add your own games using what we call the ‘Sideload’ feature, so since a couple of releases ago you can use Heroic even if you do not have an Epic or GOG account. I have used Linux since 2007 and have always used it to play some games. But since Valve developed Proton and DXVK and VKD3D started to be a thing, I migrated 100% to playing on Linux, since almost all my library was playable on it and it is the system I also use for everything else. When Epic started to give away free games was when I discovered Legendary and started using it to play those games on Linux. One example of a game that I finished at the time using it was Control, which was Epic-exclusive for a while and I really wanted to play it.
So since Legendary is a CLI tool and no GUIs were available for it at the time, I saw a good opportunity to contribute to the open source community and started to think about how I could develop a GUI for it with the developer stack I am used to. I wrote a post on the Linux gaming community on Reddit to first see if people were interested in that, and I got more than 300 comments and people were really excited about it. A couple of days after the post I released the first version with basic features. Then I asked my wife to help me with the UI/UX (she is a designer) and she did some research in the gaming community about the launchers we have on Linux, and based on that feedback she started to make the design for it, and we started to improve based on the feedback we were receiving as well." Q: Has anyone from Epic Games or GOG reached out to you about the Heroic Games Launcher at all? "No, so far no one has reached out to us about it." Q: How difficult is it to support these stores that don’t support Linux directly with a launcher? What problems do you face? "Well, first there is the lack of documentation. Since they do not have an open API, both Legendary and GOGDL are reverse engineering them to make the basic stuff work. Some things for GOG are still not available, some Galaxy features for instance. But Linguin (one of our devs) is putting a lot of effort into improving this part, and supporting GOG was a huge milestone for the project. And in parallel he is also developing Nile, a CLI tool for Amazon games that we will implement a frontend for in Heroic pretty soon." Q: Heroic has been out for a while now, how have you found the reception to it? How’s things going overall? "Well, we know that a lot of people really dislike Epic Games, and part of that is related to some comments from their CEO about Linux. In general the reception was always good, and several people came to thank me for the project and how it was making their gaming life more productive on Linux.
But there were some people saying that I should not waste my time on that, that I should do other stuff. Or they were complaining that it was an Electron app and I should write it in another framework or language. Bottom line, deciding to stick with Electron was the best decision I made for the project, since it makes it a lot easier to distribute the app, and that is why we have AppImage, Flatpaks, debs, RPMs, and also why it was relatively simple to distribute it on Windows and macOS as well. Some people come to me from time to time saying that they discovered it was possible to game on Linux because they were using Heroic on Windows and saw that there was a Linux version. So this made them experiment and even migrate to Linux, and I think this is pretty cool." Q: Why do you think people should use Heroic over Lutris, or Bottles or *insert other app*? Do you see them as competition, or is it more like companions? "I have a lot to thank Lutris and also PlayOnLinux for, for making it a lot easier to play games on Linux when this was a pretty hard task and we did not have Proton, etc. Lutris was definitely an inspiration for Heroic and still is. We have a great relationship with the developers of Bottles and Lutris and they help us when we need it, and vice versa. The way I see it is that we are united for the same cause, that is to make gaming on Linux easier and accessible for everyone, especially newcomers. Sometimes some games won’t work on Heroic and they work fine on Lutris or on Bottles, and the other way around, so having alternatives is great and that is the best thing about the open source community. So, cheers to all of them! :)" Q: How did you get started with Linux and what’s your favourite distribution and why? "I started with Linux in 2007; my first experience was not that good. It was with Fedora 6 or 7 I guess, and GNOME at that time was not really welcoming, so I hated it.
After a couple of months, a friend of mine showed me a Brazilian distro called Kurumin that came with KDE3 and had several scripts and automations for everything. We need to remember that at that time, even to use a CD or USB drive you needed to mount it manually in the terminal, so having these automations and scripts was great. So then I started to use it and love it. Nowadays I don’t have a favourite distribution. The ones I used the most were OpenSuse (around 2 years), Slackware and Gentoo for around 1 year each, and then Manjaro for I guess almost 4 years until I discovered Garuda, which is my distro of choice today. Although I want to experiment with Vanilla OS now, from my friend Mirko from the Bottles team. I basically only hear great things about it. The thing is that I don’t really have time to set things up from scratch, so I like to use distros that do most of the work for me and I just need to configure basic things for my workflow." Q: What is your opinion on the Steam Deck? Valve sent the developer Lutris a Steam Deck, did you ever hear anything from Valve on a bit of support to help you? "Steam Deck is basically the best thing in Linux gaming since forever, I would say. The amount of work that Valve did on it and making all this progress available for the Linux community is amazing. It is a great piece of hardware and its software is only getting better. Valve never got in touch to send a Deck to us, but we were able to buy one with donations from our community, which I think is great. I think it makes sense that Valve sent one to the Lutris team due to the importance and longevity of the project." Q: What are your favourite games? Give us your top 5! "Oh, that is a really hard question because I have played games since I was 6 and I play on console as well.
I will probably forget some, but if I need to pick 5 games that I had a pretty good time with, those would be: - Castlevania SOTN - Metal Gear Solid 3 - NieR: Automata - The Witcher 3" Q: If someone wanted to help with development on the Heroic Games Launcher, what are you looking for? How can they get involved? "There are several ways of helping with Heroic development; even if you are not a developer you can help with translations on our Weblate project. Developers should first be nice people. I have to say that I consider myself pretty lucky because our team today is pretty nice, we work pretty well together, and they have done a lot for the project. Our stack is basically NodeJS, Typescript, CSS/SASS and ReactJS. On our GitHub we have issues and feature requests with a label called ‘Good-First-Issue’, which is where people can start to understand how the code works and so on. We have opportunities for all levels of developers, from students to seniors, and Heroic is a great way to put something on your CV. We have even created a Heroic organization on LinkedIn so people can add their contributions and link to our official page there. This is good if you are searching for work and want to show your skill set. If you develop with Python you can also get involved in GOGDL or even Legendary as well. So if you want to contribute, just go to our Discord community and get in touch with us :)" Q: What are you most excited about for the future of Linux gaming as a whole? "I am pretty excited about that because gaming on Linux today is totally proven and I believe this year will get even better. I mean, if the Steam Deck made Ubisoft add Easy Anti-Cheat support for Linux on The Division 2, I really can only expect things will get even better from now on. I think 2023 will be simply great for us and we will have more and more support from big publishers." Q: Is there anything you would really love to add to the Heroic Games Launcher, that’s just perhaps out of reach right now?
"I think there are several things that could (and perhaps will) be added to Heroic in the future. We plan on adding more stores of course: Amazon, maybe Itch.io and others. We are also planning a way of automating fixes, like Proton does with Protonfixes, by using a database of what we call ‘workarounds’. We are also working on a new design for Heroic to make it more modern but also with a better UX in general. What I would like to have is native support for EA and Ubisoft games, because their official launchers simply suck, not only on Linux but on Windows as well. We have started to investigate and look into them, so it might be possible in the future, but it is definitely not an easy task." Big thank you to Flávio for taking the time to have a chat and answer my questions.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817036.4/warc/CC-MAIN-20240416000407-20240416030407-00117.warc.gz
CC-MAIN-2024-18
10,353
61
http://stackoverflow.com/questions/1649169/need-an-encrypted-online-source-code-backup-service/1649281
code
Please note this is not a question about online/hosted SVN services. I am working on a home-based, solo-developer project that now has commercial significance, and it is time to think about remote source code backup. There is no need for file-level check-in/check-out; all I need is a once-a-day or once-a-week directory-level snapshot to remote storage. Automatic encryption would be a bonus to protect my IP. What I have in mind is some sort of GUI interface app that will squirt a source code snapshot off to an Amazon S3 bucket on an automatic schedule. (My development PC runs on MS Windows.)
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430458592487.16/warc/CC-MAIN-20150501053632-00008-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
589
4
https://stackshare.io/npm-bcryptjs
code
What is bcryptjs?
bcryptjs is a tool in the npm Packages category of a tech stack. bcryptjs is an open source tool with 3.3K GitHub stars and 269 GitHub forks. Here’s a link to bcryptjs's open source repository on GitHub.
Who uses bcryptjs?
11 companies reportedly use bcryptjs in their tech stacks, including Tabulo, Tech Stack, and AlternateCMS. 46 developers on StackShare have stated that they use bcryptjs.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00273.warc.gz
CC-MAIN-2024-10
412
6
https://forum.fysetc.com/d/60-tmc-connection-error
code
I have tested all that. The only wires on are those for power (named VIN, BED and HEAT on the board) and, of course, the LCD. I also reflashed it this morning with the firmware you released yesterday. Whatever I do, I have the "TMC connection error" message and the X axis is wrong. 10mm requested on the LCD gives a 20mm move. Which gives a wrong and dangerous auto home... as it travels twice the distance needed.... (I was a software tester in a past life, I'm used to testing and retesting in different ways before asking tech support ;-) )
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665976.26/warc/CC-MAIN-20191113012959-20191113040959-00521.warc.gz
CC-MAIN-2019-47
517
4
https://crankymanslawn.com/2013/11/17/cranky-mans-lawn-13-winter-feeding-time/
code
Just a reminder … If you haven’t done so yet, the next week or so will be your last opportunity to apply a winter feeding to your lawn. It’s important for this to be done before the first freeze, when the lawn goes dormant for the Winter. Winter feedings are stored in the lawn’s root system, and will provide a spurt of growth in the early Spring that sets the stage for your 2014 lawn. Got my application done yesterday. What are you waiting for?!?
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00856.warc.gz
CC-MAIN-2023-06
453
4
http://alicephun.blogspot.com/2011_09_01_archive.html
code
I have been inactive and have not updated so much. I have not updated this blog, my Yelp, or my Facebook. I am working on updating my social media websites and keeping up. I also think that it is too early to state my goals for what I want to do with my continued updates. It might be that I want a record of what I do online and to keep for memory. As a result of my inactivity, or should I say procrastination, I did not pay attention to my summer to-do list. However, it does not mean that I did not do much during the summer or that I did not do at least one of the things I listed. I am glad that I made the list. To me it just means that I had a very busy and fulfilling summer and that there was so much for me to do, yet so little time. The lesson that it gave me is that in order for me to keep updating these websites I must balance my time and have the motivation to continue.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00066-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
887
1
https://oofhours.com/2021/04/03/automating-disk-cleanup-on-windows-10/
code
I’ve seen a variety of blogs over the years that talk about how to do this, but I never took the time to actually try it myself. No time like the present. It turns out that the process is fairly simple: set registry values to say what you want to clean up and then launch CLEANMGR.EXE with the right command line options. Since I wanted to put this into an MDT task sequence, I also wanted to wait until the process was done. The bulk of the work for this was amazingly already documented by Microsoft. Combine that documentation with some sample PowerShell scripts on StackOverflow and you can see where my script was derived; I just simplified it a little:
Get-ChildItem -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches' | New-ItemProperty -Name StateFlags001 -Value 2 -PropertyType DWORD
Start-Process -FilePath CleanMgr.exe -ArgumentList '/sagerun:1' -WindowStyle Hidden -Wait
Get-Process -Name cleanmgr,dismhost -ErrorAction SilentlyContinue | Wait-Process -Timeout 900
(This is a three-line script, so if you copy it, make sure the first line doesn’t have a break in it.) To run that script from MDT, copy the script into your deployment share’s “Scripts” folder and set up a “Run PowerShell script” step: The script enables every cleanup item possible (line #1), starts CLEANMGR.EXE (line #2), then waits for separate CLEANMGR.EXE and DISMHOST.EXE processes to complete before exiting. I added the 15-minute (900 second) timeout because when I ran the script to test it on my server, the processes never ended, so it’s good to give up at some point. Now, whether it does much good during the image creation process is a separate debate. It might free up a small amount of space; don’t expect miracles. Categories: Windows 10 So when running this, did you have any issue with it affecting the way your system runs? Meaning, Windows 10 issues afterward. I run a certain software suite & VMware for automation use & am wondering if this would affect that.
Or was your test just in a controlled environment where you didn’t care if a bug was created? Thanks for your time. I wouldn’t expect this to cause any issues, since Windows will periodically run it automatically too. It’s really just cleaning up random logs, temp folders, and other “known cruft.” It’s not doing anything aggressive (compared to other third-party tools that can get a little too aggressive). Thanks for the quick reply. On my system I need StateFlags0001 and not StateFlags001 for it to work. Makes sense, because sageset can take a value from 0 to 9999.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649439.65/warc/CC-MAIN-20230604025306-20230604055306-00005.warc.gz
CC-MAIN-2023-23
2,592
17
https://software.intel.com/en-us/search/site/field_form_factor/laptop-42645/field_form_factor/tablet-42648/field_programming_language/fortran-20804/language/en
code
“Why Should I Update GCC x86 Compiler?” or “GCC Compiler Performance on Intel® Atom™ from Version to Version” I’ll try to figure out what is new for the Intel® Atom™ architecture in new versions of GCC and how this affects performance and code size on the well-known EEMBC CoreMark* benchmark. We had an ask from one of the various "Birds of a Feather" meetings Intel® holds at venues such as the Super Computing* (SC) and International Super Computing* (ISC) conferences. Click "Download Now" below to obtain and view the Intel® Platform Analysis Library Metrics Framework release notes. Operating System Requirements This page provides system requirements and release notes for Intel® System Studio.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00262.warc.gz
CC-MAIN-2019-35
715
6
http://forum.haxeflixel.com/category/4/help?page=33
code
Tried a few things on a "pure" OpenFL project too, and screen grabbing resulted in some problems as well (the object wouldn't appear on the screenshot), so I'm posting it to the GitHub issue tracker; let's see if someone can give it an in-depth look :) For now, I guess I'll have to use another thing for my pause screen :sweat_smile: Now I am getting this odd error.... I have version 1.9 installed, if I understand this correctly.
[javac] Compiling 6 source files to /path/to/project/export/android/bin/deps/extension-api/bin/classes
[javac] error: Source option 1.5 is no longer supported. Use 1.6 or later.
[javac] error: Target option 1.5 is no longer supported. Use 1.6 or later.
/opt/android-sdk/tools/ant/build.xml:601: The following error occurred while executing this line:
/opt/android-sdk/tools/ant/build.xml:720: The following error occurred while executing this line:
/opt/android-sdk/tools/ant/build.xml:734: Compile failed; see the compiler error output for details.
I have also tried editing the build.xml file in the SDK directory. I tried 1.6, 1.7, 1.8, 1.9 and nothing is working (especially 1.9, the one I have; it does not support some of the options listed). Thanks for your input @neal. For now I'm either going to prevent the double-fire with a boolean check, or break the code out of update() to a new function and run a timer at 30FPS. The boolean check will probably be more reliable. If anyone knows of a better way to handle this, please let me know. Update: Definitely going with the boolean check. The timer may have worked on a simple example but not so good in the real world.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257316.10/warc/CC-MAIN-20190523164007-20190523190007-00528.warc.gz
CC-MAIN-2019-22
1,586
14
http://blog.tahoepartners.com/index.php/sharepoint-2013-apps-six-reasons-you-should-care/
code
With the launch of SharePoint 2013, Microsoft introduced the SharePoint App Model – a new way to add functionality to your SharePoint-based sites. There are tons of good articles digging into the App Model, so we’ll cover the highlights here and provide links to more details. You can think of apps for SharePoint the same way you think of apps for your iPhone, Android, or Windows Phone. Just like apps for a phone or tablet (such as a game or banking application), apps for SharePoint provide a specific piece of functionality that enhance the capabilities of the SharePoint site in which it is used. You might have an app on your phone that provides weather information; likewise, a SharePoint app can be placed on your intranet to display the weather for the user’s zip code. The image below shows some recently popular SharePoint apps on the Office Store: APPS vs. WEB PARTS Just as SharePoint users have become accustomed to talking about web parts, Microsoft has changed the game from web parts to apps. A common question involves the difference between apps and web parts. From an end user perspective, web parts and apps are similar – they both provide a way to add functionality to a SharePoint-based web site. Once created, they can be leveraged on multiple sites with proper deployment and configuration. The primary difference between apps and web parts is how they are developed and deployed. Web parts are built to run within SharePoint and are deployed directly to SharePoint Servers. Apps run outside of the SharePoint environment and are simply added to SharePoint sites. One benefit of this is that poorly built apps will not impact your site like poorly built web parts can. WHAT ARE THE ADVANTAGES OF APPS? WHO BUILDS APPS and WHERE ARE THE APPS? Apps are built by your developers, by consultants/contractors you hire, and by third parties. These apps are stored either on Microsoft’s Office Store or in your organization’s own App Catalog. 
Just like apps for your phone and tablet, apps on the Office Store are provided by third parties and have costs that vary from free to hundreds of dollars. Your company can create its own App Catalog to store the apps you’ve developed. This gives your users a centralized place to locate approved functionality they can easily add to the SharePoint sites they manage. Apps that are created in one area of your company are now easy to share across the organization. ON-PREMISES or CLOUD If you’ve read the previous blog on SharePoint 2013 in the Cloud, you know that Microsoft is pushing SharePoint Online and making sure SharePoint Online capabilities are as robust as those in the on-premises versions. In keeping with this goal, both on-Premises and SharePoint Online environments can take advantage of apps. ARE APPS THE BEST THING EVER? No, they are not. While we’ve listed many features and associated advantages, Microsoft still has work to do in this area. For example, apps run in an iFrame on the web page, which can lead to a variety of challenges (such as having an entire SharePoint page gray out when a modal dialog is displayed). And as you might expect, the App Store does not contain many apps at this point. The store is relatively new and third party developers want to see the value in creating apps before they dedicate time to creating them. We’ve talked about the highlights and this post should give you a basic understanding of the SharePoint App Model – you can learn more in this Microsoft article.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00455.warc.gz
CC-MAIN-2020-10
3,508
14
https://www.brighthub.com/mobile/google-android/articles/32489.aspx
code
- slide 1 of 5 The idea is to have a list of contacts, with the name and phone number, for example (but it can be whatever you want - the contact picture, the email address, etc.), when we touch a contact, a new screen will appear with full information about this contact, and with 2 options: Call him/her or send him/her an SMS. - slide 2 of 5 We will need two Activities, one for the Contact list and another with the information of the contact and the buttons with the “Send SMS" and “Call" Options. Getting the contacts information will be done using Content providers. And the “Send SMS" and “Call" functionality will be implemented using Intents. - slide 3 of 5 Structure and Classes We are going to work with two java classes. ContactList → This class will extend from ListActivity, this means that this activity holds a list of data, a ListView will be its screen layout. Here contact information will be displayed in a list form. ContactPage → This will be a normal Activity, here we will show the contact information and we will add two buttons to the “Call" and “Send" SMS functionality. About the resources we are going to use. mainlist.xml → This will be the layout we are going to use in the ContactList, it will just be a LinearLayout and a ListView. row.xml → This will just be a TextView. We will use this in one of the two different implementations we are going to work with. contact.xml → This will be a more complex page. It can be whatever we want. I mean, we can put the elements we want here, in the order or position we want. It is just to show information for the contact. Want the photo at the top? Or maybe at the left of the information? This will be your decision. One important thing, is to place two ImageButtons, one for the SMS and the other for Calling. We can search on the Internet, for some good icons, and use them in our ImageButtons in the contact.xml layout. We can add in our string.xml the static values we want. 
For example, if in the contact page we are going to put labels on the fields we are showing, it would be interesting to add these labels here, in string.xml, instead of writing them by hand. Name: Jose B. Cortés The label “Name" can be placed in “string.xml" - slide 4 of 5 As I said before, we are going to use Content Providers to get the information from the Phone. From Content Providers, let's remember something. Cursor → This will be the class we are going to use to manage rows queried from the DB. Other important classes we are going to use are Intents. An intent is an abstract description of an operation to be performed (ref: Google). So we are going to use them in 2 ways: -One of them is to open an Activity from another Activity, passing parameters to it. -The other way is telling an Activity to perform an action, like “Call this number" or “Send an SMS to this person" - slide 5 of 5 Do you have any idea about how we are going to construct it? Think about it before the next lessons!! Dev Guide to Creating an Android Address Book In this 5-article series we will see how to create simple applications, using the tools and knowledge we have acquired from other articles.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210735.11/warc/CC-MAIN-20180816113217-20180816133217-00197.warc.gz
CC-MAIN-2018-34
3,177
30
http://www.ingentaconnect.com/content/mors/mor/2002/00000007/00000002/art00004
code
We claim that modeling of human decision making is an Achilles heel in military OR. The paper describes a simple model of an air campaign, formalized as a two-player zero-sum game with strategic uncertainty that features mixed strategy solutions. This game is used for testing game theory's ability to explain human decision-making and learning. Both conceptual and decision-making development as a function of experience with Campaign are measured, and the results indicate negative learning in both respects, except in cases where a deterministic solution exists. Military Operations Research is the leading peer-reviewed journal publishing articles in the fields that describe operations research (OR) methodologies and theories used in key military applications. MOR specifically invites papers that are significant military OR applications. Of particular interest are papers that present case studies showing innovative OR applications, apply OR to major policy issues, introduce interesting new problem areas, highlight educational issues, and document the history of military OR.
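The kind of mixed-strategy solution the abstract refers to can be illustrated on a toy 2x2 zero-sum game. This is a generic textbook sketch in Python (the Campaign game itself is not specified in the abstract):

```python
def mixed_strategy_2x2(a, b, c, d):
    """Row player's equilibrium mix and game value for a 2x2 zero-sum game
    with row-player payoff matrix [[a, b], [c, d]] and no saddle point.

    Standard closed-form solution: the row player mixes so the column
    player is indifferent between her two columns, and vice versa.
    """
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 1
    value = (a * d - b * c) / denom  # expected payoff to the row player
    return p, value

# Matching pennies: the textbook example of a game whose only
# solution is a mixed strategy (play each option half the time).
p, v = mixed_strategy_2x2(1, -1, -1, 1)
print(p, v)  # 0.5 0.0
```

The "negative learning" finding in the paper concerns how poorly human subjects converge to such randomized solutions; the formula above is only the normative benchmark they are measured against.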
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380574.41/warc/CC-MAIN-20141119123300-00186-ip-10-235-23-156.ec2.internal.warc.gz
CC-MAIN-2014-49
1,085
4
https://forums2.cubiccastles.com/index.php?p=/discussion/15719/artifacts-some-info-please
code
For this discussion, I will refer to Cave Art, Fossils, Glyphs and Trilobytes as "artifacts" to make the typing easier. I have searched for this information and have not found it. Here is what I would like to know: 1. Must your pet be on the ground to discover an artifact, or may he be floating? 2. What is the area (in squares) in which the pet must be to sense the artifact (e.g. 3 squares x 3 squares)? 3. Can your pet sense an artifact if you are constantly moving or must you be relatively still (like mining a certain area)? 4. Is it possible that a mine might contain more than one artifact? 5. Is it possible that a mine might not contain any artifacts at all? Thank you for any help you can give. Please only respond or comment if you are sure of your answer, as I am looking for verified information, not guesses or suppositions. Also if you have experience that answers these questions, you might consider adding to the Wiki as there is very little information about pets and none that I could find about artifacts.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610919.33/warc/CC-MAIN-20200123131001-20200123160001-00309.warc.gz
CC-MAIN-2020-05
1,027
7
https://www.napari-hub.org/plugins/napari-gruvbox
code
Gruvbox theme for napari. Colors are taken from the palette in https://github.com/morhetz/gruvbox. You can install napari-gruvbox via pip: pip install napari-gruvbox To install the latest development version: pip install git+https://github.com/brisvag/napari-gruvbox.git Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request. Distributed under the terms of the GNU GPL v3.0 license, "napari-gruvbox" is free and open source software. If you encounter any problems, please file an issue along with a detailed description. - 06 March 2023 - 14 December 2022 - Information not submitted - Stars: 1 - Forks: 1 - Issues + PRs: 0
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00275.warc.gz
CC-MAIN-2023-40
734
16
https://ncigt.org/event/michael-jerosch-herold-phd-myocardial-tissue-characterization-magnetic-resonance-t1
code
Michael Jerosch-Herold, PhD Myocardial T1 mapping provides novel biomarkers of adverse myocardial tissue remodeling that have proven valuable for risk stratification. Most applications focus on the native T1 and, if T1 is also measured after extracellular contrast administration, on the extracellular volume fraction (ECV). Changes in T1 and ECV have been related to changes in myocardial water-homeostasis, edema, and build-up of diffuse interstitial fibrosis. It is generally assumed that for pre- and post-contrast T1 measurements the water exchanges at a fast rate between intracellular and extracellular spaces. Novel insights can be gained by measuring myocardial T1 early after contrast administration, when this assumption breaks down and myocardial T1 becomes sensitive to the intracellular lifetime of water. Specifically, we show how the intracellular lifetime of water provides a measure of changes in cardiomyocyte diameter, discuss the validation of this novel marker of cardiomyocyte size, and present some applications of this biomarker in patient populations suffering adverse myocardial tissue remodeling, e.g. after chemotherapy. My undergraduate training is in Geophysics at the Karlsruhe Institute of Technology in Germany. I obtained my PhD in Solid State Physics in 1986 from Iowa State University, was a postdoc and lecturer at the Baylor College of Medicine (1986 - 1990), then a research fellow at Exxon Research & Development (1990-1993). Since 1993 I have held various academic positions at the University of Minnesota, Oregon Health & Science University, and finally HMS.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00411.warc.gz
CC-MAIN-2022-33
1,600
3
https://kb.metworx.com/Users/Tutorials/ssh-and-scp-workflow/
code
Connecting to Metworx via SSH and uploading/downloading files from a workflow via ssh protocols This document walks through connecting to a Metworx workflow via ssh, and several common scenarios to upload files to/from Metworx workflows. It is assumed that the user has previously generated a public/private key pair to be used for authentication. Generating ssh key pairs is described here. 2. Connecting to a Metworx Workflow with SSH For security reasons, authentication via ssh protocols is restricted to private/public keys, and password authentication is not allowed. To authenticate, the user must provide a private key, and the contents of the corresponding ssh public key must be on the Metworx workflow, in the user's home directory. To determine the username and hostname of the workflow to connect to, click on the workflow name on the Metworx Dashboard and make a note of Username and Hostname. Again, you must have a private key that goes with the public key that is already on the Workflow. 2.1. Connecting via SSH from a Windows Workstation On Windows, a program called Putty can be used. It is assumed that the Putty application is already installed on the Windows workstation; these are the steps to connect: - Open the Putty application. It will take you to the new session screen. - For hostname, paste or type in the hostname noted in a previous step - Under the Auth section of the configuration, specify the full path to the private ssh key file - Click on "Open" to connect. When connecting to a host for the first time, you may be alerted to accept the new server's host key -- just accept the warning. - When asked, specify the username. 2.2. Connecting via ssh from a Linux Server To ssh from a Linux server to a Metworx workflow, the ssh command can be used. Where: username: sergeyn hostname: i-00ade5c2f23f4102c.metworx.com % ssh -l sergeyn i-00ade5c2f23f4102c.metworx.com The authenticity of host 'i-00ade5c2f23f4102c.metworx.com (18.104.22.168)' can't be established.
ECDSA key fingerprint is SHA256:6q6GLcSjPexrrOgyrncfMMCDrVKxXhre0qSCm9KofdM. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added 'i-00ade5c2f23f4102c.metworx.com,22.214.171.124' (ECDSA) to the list of known hosts. Please note, it is assumed that the private ssh key file is in the default location of ~/.ssh/id_rsa (the file was created in a previous step). 3. Uploading/Downloading files to a Metworx Workflow 3.1. Windows Workstation On Windows, you can use the WinSCP application to copy files between your workstation and your workflow. - Open WinSCP and fill in the hostname and username (leave password blank). Leave other fields at their defaults (i.e. SCP protocol) - Click on Advanced, and, in the screen that comes up, click on Authentication. Specify the full path to the private key file, and click OK. - Click Log in. - If a warning comes up about a new host, accept it. You should now get a screen on which you can copy files between local and remote by dragging them or by right-clicking on the files. On Linux, the scp command is commonly used to upload and download files. To copy file.txt from the local current directory to /tmp on the Metworx workflow, you would run the following command on the local server: % scp file.txt [email protected]:/tmp file.txt 100% 1690 205.2KB/s 00:00 To copy the entire /data/proj1 directory on the Metworx workflow to the current directory on the local unix server, you would run the following on the Linux server: scp -pr [email protected]:/data/proj1 . The -pr options specify to copy the directory recursively AND to preserve the timestamps on all files.
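For scripted transfers, the scp invocations above can be assembled programmatically. A minimal Python sketch follows; the helper name is made up, and the username/hostname are the examples from this page:

```python
def build_scp_command(key_file, user, host, remote_path, local_path,
                      recursive=False):
    """Assemble an scp argument list for a transfer from a workflow.

    '-i' points scp at a private key outside the default ~/.ssh location;
    '-pr' copies a directory recursively, preserving timestamps, as in
    the example above.
    """
    cmd = ["scp", "-i", key_file]
    if recursive:
        cmd.append("-pr")
    cmd += [f"{user}@{host}:{remote_path}", local_path]
    return cmd


cmd = build_scp_command("~/.ssh/id_rsa", "sergeyn",
                        "i-00ade5c2f23f4102c.metworx.com",
                        "/data/proj1", ".", recursive=True)
print(" ".join(cmd))
```

The resulting list can be passed directly to subprocess.run(cmd), which avoids shell-quoting problems with paths containing spaces.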
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00094.warc.gz
CC-MAIN-2023-14
3,601
34
https://www.sql-server-performance.com/forum/threads/refreshing-table-with-fresh-data-3581/
code
SQL Server Performance Forum – Threads Archive refreshing table with fresh data I’m having a problem with refreshing a table with live data from another table on a live server. The table has foreign key references. Without knowing what exactly your problems are I’d hazard a guess that you are unable to import some data because of foreign key constraints on tables. If you’re gonna import some data into a table that has such constraints you need to a) turn off the constraints or b) make sure all the foreign keys exist in the other tables by refreshing those tables with data from the same server. I’d go for b) because you could end up in a mess with a) Cheers Shaun World Domination Through Superior Software the foreign keys exist in the development table but all i want to do is repopulate the development table with fresh data from the same table in live. Just recreate the foreign keys after exporting data from live db to development db using ALTER TABLE… refer to BOL for more information. _________ what you are saying is i should delete the relationships in E.M, export the table from live and then recreate the table using alter table. How many tables are involved with this relationship? _________ error at destination for row number 83979. errors encountered so far in this task: 1. the statement has been terminated. violation of primary key constraint ‘pk__t_xxxxxx’. cannot insert duplicate key in object ‘t_xxxxxx’. I get the above error message after disabling the constraints and then trying to dts the table from source to destination. Have you disabled referential integrity on the development DB table? _________ Then just disable on target database and try again. _________ You seem to be having problems with both FK’s and PK’s while importing the data. Disable all referential integrity (declarative or otherwise) and any Unique constraints you have in the table you are trying to refresh. By the way how are you refreshing the data ? DTS, BCP, BULK INSERT, INSERT…SELECT ?
This might be helpful. Then just follow as referred. _________ what i’ve actually done is create a temp table, script out the development table, delete the relationships in E.M, drop the table and then DTS from the live table, and it seems to be fine. thanks all
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00694.warc.gz
CC-MAIN-2023-14
2,272
15
https://www.ruby-forum.com/t/rails-workshop-in-pune-sept-1-2-2006/66659
code
Subject: Rails Workshop in Pune, Sept 1-2, 2006 You might be interested in attending this upcoming Rails workshop on the 1st and 2nd Sept. It is being hosted by Shashank D., a Ruby & Rails expert, and Dibya P., who heads one of the few Rails-based development companies. The contents/agenda look awesome and it's a steal at Rs.8000. I might be able to help out with some of the hands-on segments. REGISTER HERE: http://www.innovationworkshop.in/ See you there. Professional Ruby on Rails Development 2 day workshop for getting started with professional Rails development September 1st and 2nd, 2006 Introductory Fees: Rs. 8000/- per person Timing: 10:00AM to 6:00PM This is a practical hands-on 2 day workshop on Ruby on Rails that will get you started with Rails with a solid foundation in the fundamentals of Rails. You do not need to know either Ruby or Rails to benefit from this workshop. The workshop will drive you through some hands-on coding as well. This is a quick start workshop for professional Ruby on Rails development that will set you in the right direction with just the right set of tips and ideas. We look beyond what is publicly available. We look at the upcoming version of Rails and the new features and let you know the benefits ahead of public releases. While this workshop is meant for developers, it will be an ideal technical introduction for Project Managers. Who can benefit from this workshop? Software application developers looking to start on Rails or already working on Rails will benefit a lot from this workshop to write more elegant Rails applications. This 2 day workshop will leave you all set to develop full fledged Rails web applications by leveraging the true benefits of Rails. If you already have some experience developing web applications using other technologies, you will find it easier to see the differences and benefits of Rails. Knowing Ruby is not a must, but can be a plus to get rolling faster.
Introduction to Ruby: ~~~~~~~~~~~~~~~~~~~ This will cover the basics of the Ruby language which is the foundation of the Rails Web framework. * It will touch the salient features of the language, which make developing and extending the framework a joyful experience. * It will also introduce you to some select libraries and tools available with Ruby. * Hour 0: Ruby Basics * Hour 1: Ruby Blocks * Hour 2: Ruby Object Model * Hour 3: Ruby Dynamics * Hour 4: Ruby Libraries * Hour 5: Ruby Tools * Hour 6: Case Study * Hour 7: Case Study continued Introduction to Ruby on Rails: ~~~~~~~~~~~~~~~~~~~~~~~~ This will cover the basics of the Web framework and develop a small Web 2.0 application from inception to end. * It will introduce you to the MVC architecture which forms the basis of the framework. * The framework is made up of five components: ActiveRecord, ActiveSupport, ActionPack, ActionMailer, ActionWebServices. * We will work on all these components to create a simple yet sophisticated Web application. * Hour 0: MVC Architecture Basics * Hour 1: ORM Basics: ActiveRecord * Hour 2: ActionPack = ActionView+ActionController * Hour 4: ActiveSupport+ActionMailer+ActionWebServices * Hour 5: RJS * Hour 6: Case Study * Hour 7: Case Study continued -- rm -rf / 2>/dev/null - http://null.in Dont judge those who try and fail, judge those who fail to try..
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100705.19/warc/CC-MAIN-20231207221604-20231208011604-00136.warc.gz
CC-MAIN-2023-50
3,239
25
https://pcpartpicker.com/forums/topic/103289-best-650watts-psu-under-65us
code
none. the rosewill arc 650 and thermaltake smart 650 should both be around $65, but neither are what I'd call a "best" bargain. the cheapest really good 650 watt is the XFX TS 650 bronze for $70. in this situation it's important to examine your needs again, because if you don't actually need a 650 watt psu, get a better, lower watt psu. Well..i will oc my gpu (390) and i think it needs at least a 650 watt psu I'll agree there. With a power hungry gpu it's even more important to buy a quality psu. None, there aren't any good 650 watt psu's for $65 or under. You can choose this one
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00338.warc.gz
CC-MAIN-2020-16
586
5
https://communities.sas.com/t5/SAS-Procedures/How-to-export-the-results-of-PROC-LOGISTIC-to-excel-file/td-p/420697
code
I am using PROC LOGISTIC on multiple datasets, and for this I am using a macro. Is there a way I can export the results to an excel sheet for each dataset? I tried using ODS TAGSETS.Excelxp, but it is not working: ods graphics on; proc logistic data=&in_ds. ALPHA=0.05 outmodel=DS_MDL; class var1 var2 var3; model y=x / options; score data=&in_ds. out=ds_scr; ods graphics off; OPTIONS ( Orientation = 'landscape' FitToPage = 'yes' Pages_FitWidth = '1' Pages_FitHeight = '100' ); ODS TAGSETS.EXCELXP close; I want to create separate reports for 3 datasets; the report name can be passed as a parameter to the macro. Use the dark and forgotten art of by group processing. ods tagsets.excelxp file='d:/worksas9regression.xml' style=minimal options(orientation='landscape' fittopage='yes' pages_fitwidth='1' pages_fitheight='100'); data have; length ds_name $20; set dataset1 dataset2 dataset3 indsname=tmp; ds_name=tmp; run; ods graphics on; proc logistic data=have alpha=0.05 outmodel=ds_mdl; by ds_name; class var1 var2 var3; model y=x / options; score data=have; run; ods graphics off; ods tagsets.excelxp close; Do note I have corrected the various typos, the random casing, lack of indents etc. I also called the filename xml, which is what you are actually creating here by use of a tagset. Also note in the tagset options you may need to set sheet_interval='bygroup'. As I have nothing to test this on it isn't tested. Please post some test data, and code which accurately reflects what you have; we can only go on what you post here. Your error is to do with gpath: If the datasets are different and the variables are different, how are you aiming to get the different variables into the macro code? If the same variables are being used in the model each time, just keep them in the dataset. If they are different variables then show how this will go into the macro. %macro build_model(in_dset); ods graphics on; proc logistic data=&in_dset.
ALPHA=0.05 /*&model_plots.*/ outmodel=ds_MDL; class &list_class_vars.; model DEFAULT_FLAG(EVENT='1')=&list_loan_vars. &selected_macroeconomic_vars. / &model_opts.; score data=&in_dset. out=ds_SCR; run; ods graphics off; %mend build_model; %let plot_path = /home/../../../reports; ods tagsets.ExcelXP file="&plot_path./report.xml" style=minimal options(orientation='landscape' FittoPage = 'yes' Pages_FitWidth ='1' Pages_FitHeight = '100'); %build_model(dataset1); ods tagsets.ExcelXP close; error message: NOTE: Convergence criterion (GCONV=1E-8) satisfied in Step 17. NOTE: Convergence criterion (GCONV=1E-8) satisfied in Step 18. NOTE: Convergence criterion (GCONV=1E-8) satisfied in Step 19. NOTE: Convergence criterion (GCONV=1E-8) satisfied in Step 20. WARNING: GPATH or PATH is not a writable directory. It will be ignored. ERROR: Cannot write image to . Please ensure that proper disk permissions are set. ERROR: Cannot write image to . Please ensure that proper disk permissions are set. NOTE: The SAS System stopped processing this step because of errors. Please find my code and error messages.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00373.warc.gz
CC-MAIN-2023-50
3,607
26
https://community.spiceworks.com/topic/460425-outlook-2013-calendar-asking-for-password
code
Multiple users are sharing a calendar that was created by an employee that was recently let go. This person's AD account has been disabled, etc. When the account was disabled is when the users sharing the calendar with this person started getting a password popup window wanting them to enter their password. (I guess it's wanting them to authenticate to a calendar that has been disabled) I tried naming another user the owner of the affected calendars, however the users still get this password pop-up when clicking on the calendars of the disabled user's account. I'd rather not have someone re-create the calendar as it would be time consuming. Any ideas on how to get these calendars moved from the disabled user's account? edit: We host our own exchange 2010 client outlook 2013
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00336.warc.gz
CC-MAIN-2022-21
786
6
https://docs.3box.io/build/web-apps/messaging
code
Threads are feeds consisting of linked and timestamped messages that enable decentralized peer-to-peer communication between one or more users by allowing these users to post messages in a sequence. Threads are great for adding commenting, chat, messaging, personal feeds, and content streams to your application. They are also great for sharing information between users. Threads are available as either persistent threads, where messages are available in a persistent OrbitDB feed store unless explicitly removed (by the author or a moderator), or ghost threads, where messages are not persisted in a database but whose history is kept in-memory by online peers and can be requested by new users. For ghost threads, if all peers go offline then messages disappear.
- DAO Proposal Systems
- Public following or contact lists
- Sharing data between users
Getting Started with 3Box Threads This section describes how to perform various interactive functionalities on a 3Box thread, including creating a new thread, joining a thread, posting messages to a thread, adding moderators, and more. To perform these actions, you must first authenticate the space. If you only need to display public thread data for persistent threads, you can use the static read-only get methods described here.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00135.warc.gz
CC-MAIN-2020-05
1,281
9
https://discourse.paraview.org/t/paraview-5-7-mac-crashing/3241
code
I am using Paraview 5.7 on the Mac and it reliably crashes when opening the following: This is the state file: multiblock_scalar.pvsm (513.6 KB) which loads the following data: Archive.zip (19.4 KB) To make it crash, load the state file (with the data bs…pvtu) then try to display the point probe time history. I think this may have something to do with the fact that some of the numbers in the data files are small i.e. 1.0e-35 (which is my own issue), but this shouldn’t cause Paraview to crash in any case. I don’t know the underlying cause, but I get a crash when loading this state file and showing the point probe history with the following stack trace: #0 0x12bbff1f0 in vtkDataSetAttributes::CopyData(vtkDataSetAttributes*, long long, long long) vtkDataSetAttributes.cxx:817 #1 0x123a0dd17 in vtkExtractDataArraysOverTime::vtkInternal::AddTimeStepInternal(unsigned int, int, double, vtkDataObject*) vtkExtractDataArraysOverTime.cxx:569 #2 0x123a0d241 in vtkExtractDataArraysOverTime::vtkInternal::AddTimeStep(int, double, vtkDataObject*) vtkExtractDataArraysOverTime.cxx:320 #3 0x123a17a2f in vtkExtractDataArraysOverTime::RequestData(vtkInformation*, vtkInformationVector**, vtkInformationVector*) vtkExtractDataArraysOverTime.cxx:777 Well. At least it is reproducible. I read another thread where numbers with a small negative exponent were causing crashes. I replaced everything with an exponent more negative than e-16 and the crash went away. Before I did that I was getting an error: ERROR: In /Users/kitware/dashboards/buildbot-slave/a64f5607/build/superbuild/paraview/src/VTK/IO/XML/vtkXMLUnstructuredDataReader.cxx, line 527 vtkXMLUnstructuredGridReader (0x7ff50e906980): Cannot read points array from Points in piece 0. The data array in the element may be too short.
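The workaround described above (replacing values whose exponent is more negative than e-16 before loading the file) can be automated. A rough Python sketch, assuming plain-text data files and simple scientific notation; the function name is made up:

```python
import re

# Matches numbers in scientific notation with a negative exponent, e.g. 1.0e-35;
# the capture group holds the exponent's magnitude.
TINY = re.compile(r"-?\d+(?:\.\d+)?[eE]-(\d+)")

def clamp_tiny(text, min_exp=16):
    """Replace numbers smaller in magnitude than 10**-min_exp with 0."""
    def repl(match):
        return "0" if int(match.group(1)) > min_exp else match.group(0)
    return TINY.sub(repl, text)

print(clamp_tiny("0.5 1.0e-35 -2.3e-12 4.0e-40"))  # 0.5 0 -2.3e-12 0
```

This is a blunt text substitution, so it should only be run over the numeric data arrays, not over XML headers that might happen to contain a matching pattern.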
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00470.warc.gz
CC-MAIN-2022-21
1,791
15
https://pbworks.net/multilanguage-mvc4-website/
code
So you want to create a multilingual project in MVC4 or MVC5? You are new to ASP.NET MVC and you are looking for a working solution, easy to understand and easy to implement? You are in the right place. This solution has been used for a couple of live websites and it worked quite well. Also the Frontend Developer who worked with me was quite happy with the solution. Read on to understand why.
You will find a couple of tutorials and approaches online. One I liked and followed is Creating a Bilingual ASP.NET MVC 3 Application – Part 2: it offers working culture detection based on the user browser's settings and it allows the user to browse the site also by entering in the URL -just after the domain name- the language abbreviation (like this: http://www.example.com/en/Home/Index ). The solution in the above tutorial allows the developer to save the translations in .resx files (same principle for LocalResources and GlobalResources). However, the Cultural View Engine for MVC tutorial (or better: Custom View Engine for Localized Views with ASP.NET MVC Razor - for Razor cshtml pages) offers the same browsing routes and the possibility to save the translations directly in the view files, making the editing much more user friendly than editing the .resx files in Visual Studio (which, for long and formatted texts, is not user friendly at all).
When I studied these 2 approaches/tutorials, I decided to mix them, making the developer's life much easier and more flexible. Imagine the following scenario:
- a Global.resx file for all the common translations for classic UI labels, buttons, tooltips, etc.
- a default ~/Views folder containing the main views and partial views (e.g. Index.cshtml, About.cshtml, _Header.cshtml, _Footer.cshtml) with localized content taken from the .resx files, like this: @Global.FooterText
- some views containing the content not as a variable like @Global.FooterText but copied and pasted there directly, e.g. "Copyright Foo Bar 2013" saved in Example.cshtml
- the same views as above, with the same name and a language suffix, e.g. Example.de.cshtml, with the content translated directly in that file (in this example, "de" is for German)
What you need:
- Visual Studio Community 2013 (it's like the "Pro" version, but for FREE! See license details)
- Download and install MVC 4 (not needed if you are using Visual Studio 2013, you probably use MVC5)
- Download and install the NuGet package manager plugin for Visual Studio (not needed if you have Visual Studio 2013, it should be installed already)
- Configure Visual Studio to restore the NuGet packages automatically
- MultilanguageMVC4-2013.10.07a.zip (use this as reference only. Create your own project in Visual Studio 2013 in MVC5 to have all the updated packages and dependencies, bootstrap etc.)
- Update 2016-05-13: I didn't check it, but you may have a look at Xavier educa02 on github (a repository based on this work)
Note: in the Areas you can't have a controller with the same name as a standard controller (or vice-versa). Every controller must have a unique name.
You may find these 2 helpers quite handy (I use GetLanguage in my controllers when I need to process contents based on the language, for example to build a URL I have to insert in a mail sent to the user, or similar things).
To use/implement this solution in your own project you need about 5 minutes (+ a few more minutes to add new languages or remove the ones you do not want to support). Just follow this simple procedure:
- Copy all the files you find under "Code" and put them in a separate class library (and add a reference to it), or copy the "Code" folder directly into your project.
- Check the modifications to the global.asax.cs file and apply them to your global.asax.cs file (and update your default route in case it's not Home/Index).
- If you want more languages or want to remove some of the default ones I added (en, de, fr, it), you must edit the CultureManager.cs file.
Check the constants at the top and the method InitializeSupportedCultures(). If you need more precision in the culture (in my case I use only the 2-letter ISO abbreviation) you may do some refactoring.
My solution is based on these 2 tutorials:
- Creating a Bilingual ASP.NET MVC 3 Application – Part 2
- Custom View Engine for Localized Views with ASP.NET MVC Razor
I downloaded and updated the first tutorial's solution and added the LocalizedViewEngine.cs from the 2nd tutorial, then updated the global.asax.cs file accordingly.
As some people asked for it, I also added an example with Areas. If you don't need areas, simply delete the "Areas" folder, save all and re-compile.
Seems that this blog post is still helpful. Good news: this solution of course also works with MVC5. Don't update my demo project to MVC5; start a fresh one and follow the procedure to add multi-language views support.
Added some more useful information on how to use this solution. Hopefully it should be quite easy to implement in your solution.
As some people asked for it, I added an example with Areas. I added an Area called "Admin", a controller called "Account" and 2 test Views: Index and Test. I added a localized Index.de and for Test I used the .resx resources. To register the new area and make it work with my multi-language support, see Areas\Admin\AdminAreaRegistration.cs. Hope this helps.
I'm not 100% sure about this, but my solution can lead to caching problems. Symptom: after changing the language to xx I keep seeing the contents of my main view PageFoo.cshtml and not the contents of PageFoo.xx.cshtml.
Solution: for my production website I needed to keep the debug option in the web.config file; more precisely, in the transform web.config.live in Visual Studio I had to keep this (as I use the publish function): <compilation xdt:Transform="RemoveAttributes(debug)" /> If you don't have a transform file, then the option in the web.config file has to be: <compilation debug="true" targetFramework="4.0" />
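The heart of the mixed approach described above is a view engine that prefers a language-suffixed view (Example.de.cshtml) when one exists and falls back to the default view (Example.cshtml) otherwise. A minimal sketch of that idea follows; this is not the exact code from the linked tutorials or the downloadable project, and the class name is illustrative:

```csharp
using System.Threading;
using System.Web.Mvc;

// Sketch: try "ViewName.xx.cshtml" first, fall back to "ViewName.cshtml".
public class LocalizedRazorViewEngine : RazorViewEngine
{
    public override ViewEngineResult FindView(ControllerContext controllerContext,
        string viewName, string masterName, bool useCache)
    {
        // Two-letter language code set earlier in the request pipeline, e.g. "de"
        string lang = Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName;

        ViewEngineResult localized =
            base.FindView(controllerContext, viewName + "." + lang, masterName, useCache);
        if (localized != null && localized.View != null)
            return localized; // Example.de.cshtml exists: use the translated view

        // No localized view: fall back to the default Example.cshtml,
        // whose labels come from the .resx resources instead.
        return base.FindView(controllerContext, viewName, masterName, useCache);
    }
}

// Registered in Application_Start (Global.asax), roughly:
//   ViewEngines.Engines.Clear();
//   ViewEngines.Engines.Add(new LocalizedRazorViewEngine());
// A complete implementation would override FindPartialView the same way
// so that _Header.de.cshtml and friends are also picked up.
```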
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00659.warc.gz
CC-MAIN-2024-18
5,972
35
http://forum.pagelines.com/topic/800-changed-to-php-version-5-but-theme-still-complaining/?k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=topic/800-changed-to-php-version-5-but-theme-still-complaining/&langid=1
code
I'm trying the basic Whitehouse theme with WP 2.9. The options page is giving this warning: "You are using PHP version 4.3.11. Version 5 or higher is required for this theme to work correctly. Please check with your host about upgrading to a newer version." I've had my host upgrade my account to use PHP 5, but the warning continues to appear. Is there some way to cause the WP theme to retest the PHP version and detect that version 5 is being used? I've refreshed the browser page, and logged out and back into wp-admin. No go. I suspect this is preventing me from updating Whitehouse options like the footer. Pasting the GA snippet and saving options doesn't seem to save the pasted code. I've checked using the appearance editor to look at the footer code, as well as viewing the source of the served blog page.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163997135/warc/CC-MAIN-20131204133317-00070-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
798
9
https://virtual.keystonesymposia.org/articles/1449/view
code
Spatial Transcriptomics analysis of the ALS spinal cord Silas Maniatis1, Sanja Vickovic2, Tarmo Aijo3, Dayanne Martins de Castro3,5, Richard Bonneau4,5, Joakim Lundeberg2, Hemali Phatnani1 1New York Genome Center, New York, NY, USA; 2Science for Life Laboratory, Division of Gene Technology, KTH Royal Institute of Technology, Stockholm, Sweden; 3Simons Center for Data Analysis, New York, NY, USA; 4Center for Computational Biology, Flatiron Institute, New York, NY, USA; 5Departments of Biology and Computer Science, Center for Genomics and Systems Biology, New York University, New York, NY, USA In ALS, symptoms typically appear first in a single limb and subsequently spread, ultimately leading to complete paralysis. Mounting evidence suggests that ALS pathology involves dysfunction of both motor neurons and glia. This implies that dysregulated intercellular signaling contributes to the disease. Understanding the cartography of gene expression in the spinal cord as ALS progresses will provide insight into the molecular basis of each cell type’s contribution to the disease, and how events initiated in one cell type or one region of the spinal cord ultimately lead to widespread MN loss. In the work presented here, we use a novel method for spatially resolved RNAseq, termed Spatial Transcriptomics, to identify gene expression programs associated with the initiation and spread of ALS pathology. When combined with computational tools that we have developed for Spatial Transcriptomics data analysis, our data reveal previously unknown changes in gene expression related to ALS disease state in the SOD1-G93A mouse spinal cord. These changes occur in multiple cell types, and appear first in ventral regions. We validate these findings and set them in the context of previously identified ALS disease mechanisms using FISH and IF. 
As Spatial Transcriptomics does not require genetic manipulation, we are able to apply the same methodology to post-mortem spinal cords from human ALS patients. Our results demonstrate the power of spatially resolved, transcriptome wide gene expression analysis for understanding the molecular basis of neurodegenerative disease.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474663.47/warc/CC-MAIN-20240226194006-20240226224006-00069.warc.gz
CC-MAIN-2024-10
2,176
4
https://support.rallybound.com/display/KB/Graphics+tab
code
The graphics tab stores the social images for your app. You can upload/edit/delete images from this tab.
To add a new image:
1) Click the plus button in the left corner.
2) Select the type of image you are uploading (Facebook/LinkedIn share).
3) The dimensions of the images are given on the pop-up.
4) Click on the big plus icon next to the Graphic field.
5) Upload the image and click Save.
To edit an existing image:
1) Click on the image in the list.
2) Hover over the image.
3) A minus button will be displayed at the top left of the image.
4) Click the minus button; this will delete the image.
5) Then upload a new one in its place.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00423.warc.gz
CC-MAIN-2020-05
634
13
http://docs.daz3d.com/doku.php/artzone/pub/software/dazstudio/reference/prefsif
code
To set the Interface Preferences, choose Edit > Preferences to open the Preferences dialog and click on the Interface tab to enable it. The Interface tab has the following options:
Show Tool Tips: Check the Show Tool Tips checkbox to display information about a tool when you hover over it with your mouse cursor. Clear this checkbox to disable this feature.
Activity Image: Allows you to customize the image that appears in the right portion of the interface. By default, a grayscale image of Victoria 4 appears. You can also choose None, a Dragon, Trees, or Cycle (which cycles through all available images).
OpenGL: If your card cannot support OpenGL, a notice informs you as such. If your card supports OpenGL, the following options will be displayed:
Manipulation Draw Style: The Manipulation Draw Style option allows you to conserve computing resources by displaying scene elements in reduced detail while you are manipulating (posing or moving) an object, camera, or light. There are three options:
- Choose Off to display the objects in your scene as they normally appear (with full geometry shape and textures) while you manipulate them. This option consumes the most computer resources.
- Choose Wireframe Box to display your objects in wireframe mode while you manipulate them. This option consumes the least amount of computer resources.
- Choose Smooth-Shaded Box to display your objects as smooth boxes while you manipulate them.
Bounding Box options: These options determine how the bounding boxes for selected objects are displayed.
- Edge Length: An edge length of 1 will display the edges of the bounding boxes as solid from corner to corner. Lowering this amount will shorten the edges. Setting this value to 0 means that no bounding boxes will be displayed.
- Active Opacity: Sets the opacity of the edges of the bounding box for active selections. A setting of 1 will display the edges at 100% opacity, while lowering this setting will lower the opacity.
- Inactive Opacity: Sets the opacity of the edges of the bounding box for inactive selections. A setting of 1 will display the edges at 100% opacity, while lowering this setting will lower the opacity.
Manipulation SubDivision: Click the option box to toggle this option Off or On. When On, this option will automatically switch to the zero subdivision level when a Sub-D figure is manipulated. When Off, the figure will remain at the current subdivision level when it is manipulated.
Hardware Anti-Aliasing: Click the option box to toggle this option Off or On. When On, this option transfers all the edge-smoothing work to the video card instead of your CPU. You may want to turn this option Off if your video card supports OpenGL acceleration but is below the recommended requirements.
Display Optimization: By default, this option is set to Off since not all video cards can handle the display optimization. When set to On, the optimization stores more of the loaded geometry in the memory of your video card in order to speed up the display.
OpenGL Texture Quality: This option controls the quality of the textures in your viewport. Drag the slider from left to right, or use the right and left arrow keys, to use the setting that is best for your system resources. Slider options, from left to right, are Best Performance, Good Performance, High Quality, and Best Quality.
Pixel Buffer: Provides a secondary buffer for rendering with 3D Bridge and by some hardware renderers where a viewport is obscured by another window while rendering. Not required unless you are using 3D Bridge with Photoshop; some hardware configurations may become unstable when pixel buffers are allocated. Toggle the buffer On or Off as needed; when it is On, use the slider to set the buffer size.
About Current Hardware: Click this button to view a list of features that are supported by your OpenGL-compliant video card. The list includes the OpenGL version supported by your card, and displays its manufacturer's name and model number. A list of supported features then follows, including limitations on the number of lights, texture units, and maximum texture size that the card can support.
Restore Factory Defaults: Click this button to remove all of your custom settings and return to the default settings furnished with your DAZ Studio installation. This will reset all startup dialogs and remove content directories from the preferences.
After you complete your Interface Preferences settings, use one of the three buttons at the bottom of the dialog:
- Click Apply to set the settings for the current preference tab; the Preferences dialog remains open so that you can set other preferences.
- Click Accept to apply the settings and exit the dialog immediately.
- Click Cancel to exit the Preferences dialog and discard any unsaved changes.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948585297.58/warc/CC-MAIN-20171216065121-20171216091121-00471.warc.gz
CC-MAIN-2017-51
4,784
21
https://moneytalkwitht.com/podcast/i-switched-careers-so-can-you-heres-how-ep-251/
code
Need a career change? Join Tiffany Grant on her podcast to learn tips and strategies for successfully changing careers. Hear from Tiffany about the importance of identifying transferable skills, finding the right resources, and how to balance financial security while taking this leap in your professional life. She’ll also discuss potential challenges and ways to tackle them with courage. Get ready for some inspiring advice that’ll help you pursue your dream career! Every Tuesday, Tiffany answers one of your submitted questions. To submit a question for an upcoming episode, visit here: https://www.moneytalkwitht.com/asktiffany If you need help talking through your career pivot, I would be happy to help! Be sure to book a consultation to see how I can help: https://academy.moneytalkwitht.com/15-minute-consultation Visit our website for more career tools and resources: https://moneytalkwitht.com
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00277.warc.gz
CC-MAIN-2024-18
909
5
https://cedrus.com/lumina/controller.htm
code
Up to 10 Keys: Use two response pads simultaneously, with up to five keys each. It's like getting a free I/O device that can send 8 bits of TTL output.
Sync With Scanner: Compatible with GE, Siemens, and Philips MRI scanner triggers.
Built-in high-resolution timer that can be reset in three different ways.
Universal Software Support: USB "keyboard mode" makes your controller work with any software package. With this feature, your computer thinks that a second USB keyboard is plugged in, allowing the controller to work with any software.
Light Sensor Support: Record precisely the onset of visual stimuli and even have it reset the built-in timer.
Broad Application Support: A number of software packages know how to communicate directly with the Lumina controller and take advantage of its features. The commands that the Lumina controller accepts are public, and Cedrus publishes and maintains Python and C++ code libraries.
The built-in timer in the Lumina controller measures with precision when the participant presses (or releases) a key, then sends time-stamped information to the computer. Just as important, the timer can be reset in one of three ways: via a command sent over USB, when the light sensor detects the onset of a visual stimulus, or via an external device.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496694.82/warc/CC-MAIN-20190220210649-20190220232649-00136.warc.gz
CC-MAIN-2019-09
1,281
16
https://angel.co/yip-eric-gmail-com
code
Co-founded Track Revenue in 2015. Inventor on 25+ patents in the areas of mobile, GPS, QoS, and data collection. MS in Computer Science, UCSD.
• Web Tech: AWS, Azure, Distributed Computing, Complex Data Pipelines, Machine Learning
• Mobile: Android, Windows Phone, Qualcomm AMSS, GPS
• Databases: MySQL, MongoDB, Redshift, PostgreSQL
Current CEO and co-founder of Egge. Former PhD student @university-of-california-san-diego; NSF/SIO IGERT fellow; co-founder of PubKnowledge
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145708.59/warc/CC-MAIN-20200222150029-20200222180029-00303.warc.gz
CC-MAIN-2020-10
477
7
http://www.scandal-heaven.com/t838-if-you-were-to-be-a-thing-that-one-of-the-scandal-girls-uses
code
if you were to be an item/thing that the girls of SCANDAL uses.. what do you want to be? and why? for me... i wanna be Haruna's iPhone!!!!! the reason is simple.. i can always capture her beauty and... ill have myself equipped w/ auto delete/cancel function > if a msg or call from a suitor arrives ill automatically reject it!! EDIT: don't say things like "i wanna be her underwear" OK? xDDDD Last edited by haruhiyuu on Tue Apr 20, 2010 6:42 pm; edited 3 times in total
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00536-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
471
6
http://forums.zimbra.com/migration/27303-importing-multiple-pst-files-using-windows-cli-utility.html
code
I'm following the wiki guide to import multiple PSTs into different accounts from Exchange. I have exported a PST file for each account on our Exchange 2003 server. I wrote a configuration.xml file to import each PST file into the Zimbra server into a new user account. The problem is that when I run "C:\Export\ZCSPSTImportWizard-5.0.10_GA_2638.exe C:\Export\configuration.xml" from the Run window, the application imports only the first PST configured with the "Import" tag in the XML file. I attach a test XML file. Thanks in advance
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00087-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
509
6
https://forum.greenbone.net/t/cve-2021-39226-vulnerability-scan-doesnt-detect-the-vulnerability/15078
code
I just installed the free edition of the Greenbone vulnerability scanner. I tested it against a test web server and, after some hours, I received the results I needed. It was a very long process, but it satisfied my needs. Recently, my ISP reported a vulnerability (CVE-2021-39226) related to one of our public internet services. So, in order to create a vulnerability report before and after applying the needed fix, I would like to create a dedicated scan configuration (this way I will reduce the scanning time as well). This is what I have done: I cloned the "Empty" scan config template; I customised it, selecting only the "Grafana 2.0.1 < 7.5.11, 8.x < 8.1.6 Snapshot Authentication Bypass Vulnerability (GHSA-69j6-29vr-p3j9)" check from "Web application abuses". Unfortunately, the task I created (using the OpenVAS Scanner + the customised "Empty" template) is not able to find the vulnerability affecting the test server. What am I doing wrong? Could you please help me complete the scanner configuration correctly?
I think the "Base" scan config is a better starting point to clone than the "Empty" scan config, since the comment for it is "Basic configuration template with a minimum set of NVTs required for a scan". From there you should be able to add only the single VT you wanted: "Grafana 2.0.1 < 7.5.11, 8.x < 8.1.6 Snapshot Authentication Bypass Vulnerability (GHSA-69j6-29vr-p3j9)".
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00408.warc.gz
CC-MAIN-2023-50
1,423
12
https://community.canvaslms.com/t5/Canvas-Question-Forum/Can-cavas-import-quiz-from-a-TST-file/m-p/214877
code
I have ExamView questions which are saved in the TST file format (for example, I have a test named moons.tst). Is there a way to import these questions into a new Canvas bank to use to build a quiz?
Hi @rauchs, welcome to the Canvas Community! There are some discussions about ExamView that I will point you to.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780061350.42/warc/CC-MAIN-20210929004757-20210929034757-00661.warc.gz
CC-MAIN-2021-39
373
5
https://rdrr.io/rforge/Umpire/man/e00-Umpire-package.html
code
A suite of microarray simulation software which includes additive and multiplicative noise and a mixture of expressed and unexpressed genes, and uses statistical distributions to capture differences in mean expression and in standard deviation both within groups and between groups of samples. Finally, it incorporates a simple block correlation structure between genes. For a complete list of functions, use library(help = 'Umpire').
Zhang J, Coombes KR. Sources of variation in false discovery rate estimation include sample size, correlation, and inherent differences between groups. BMC Bioinformatics. 2012; 13 Suppl 13:S1.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647768.45/warc/CC-MAIN-20180322034041-20180322054041-00657.warc.gz
CC-MAIN-2018-13
740
8
https://www.ornl.gov/staff-profile/stacy-j-prowell
code
Dr. Stacy Prowell is interested in the security and resiliency of critical infrastructure. Dr. Prowell's work on a system for deep analysis of compiled software led to the Hyperion system, which received a 2015 R&D 100 award and two awards for technology transfer. Dr. Prowell helped to create the initial cybersecurity research group at ORNL, serving as Chief Cyber Security Research Scientist, and also helped focus the group on critical infrastructure, serving as Program Manager for the lab's Cybersecurity for Energy Delivery Systems (CEDS) program, under which the lab received more DOE CEDS funding than any other national laboratory. Previously, Dr. Prowell worked in the CERT Division of the Software Engineering Institute on automated analysis of malware. Dr. Prowell is an IEEE Distinguished Lecturer for the Transportation Electrification Community. Dr. Prowell is a member of AAAS, Sigma Xi, and a senior member of the IEEE. Dr. Prowell is a faculty member in the Department of Computer Science at Tennessee Technological University, where he teaches CSC 6580, advanced and automated reverse engineering. His most recent lecture series from this class is available online. Starting at the end of 2022, Dr. Prowell is the Associate Director for Tennessee Tech's Cybersecurity Education, Research, & Outreach Center (CEROC).
- Selected as an IEEE Distinguished Lecturer by the Transportation Electrification Community (2016)
- 2016 ORNL Lab Director "Best SEED Money Fund Poster" Award, with Jeff Nichols, Bobby Bridges, and Jarylin Hernández
- 2016 Federal Laboratory Consortium Excellence in Technology Transfer Award, with David Sims
- 2015 R&D 100 Award, for Hyperion
- 2015 UT-Battelle Technology Commercialization Award, for Hyperion Team
- 2013 UT-Battelle Significant Event Award
Trademarks and Patents
- S. J. Prowell, "Performing Hierarchical Analysis of Markov Chain Usage Models," US Patent 7,219,049, filed September 15, 2003.
- S. J. Prowell and C. Rathgeb, "Statistical Fingerprinting for Malware Detection and Classification," US Patent 9,135,440, filed July 31, 2013.
- P. Evans, N. Paul, S. Prowell, "System and Method for Key Generation in Security Tokens," US Patent 9,172,698, filed October 11, 2013.
- S. J. Prowell and K. D. Sayre, "Automated Clustering of Malware Variants Based on Structured Control Flow," Provisional Patent #62/170,758, filed June 4, 2015.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.15/warc/CC-MAIN-20231130000127-20231130030127-00079.warc.gz
CC-MAIN-2023-50
2,389
16
https://im-an-economist.blogspot.com/2021/10/nobel-prize-for-causal-inference-why-it.html
code
Nobel prize for causal inference: why it matters
This year's Nobel prize in economics was awarded to three brilliant economists, David Card, Joshua Angrist, and Guido Imbens, for revolutionizing the way economists (and social scientists) do empirical research. Specifically, Card got it for his contributions to labor economics, and Angrist and Imbens got it for causal inference, but all three made breakthrough contributions in applying the scientific method to economics. In the field, we call it the "credibility revolution". I am very familiar with the work of all three, as I've used their papers very often while learning about causal inference, teaching it, and citing it in my own empirical research. I also had the honor of receiving comments on one of my papers (the recently published Politics of Bailouts) from Josh Angrist at a conference I co-organized. The way I wish to pay respects to the three of them is by explaining to you, my dear reader, why this Nobel prize in particular deconstructs the typical malevolent narrative that the economics Nobel is not really a Nobel, but rather a prize of the Swedish central bank in memory of Alfred Nobel. Why? Because this prize (like many in the past 20 years) awards empirical economists who made groundbreaking contributions in applying the true scientific method to the field. With their work, and the work of many others following in their footsteps, the field began to slowly progress from the economic orthodoxy of the pre-1990s to a much more developed field that actually has the right to start calling itself a science. For example, the Card and Krueger (1994) paper, cited by the Nobel committee for its brilliant contribution to the field, was a state-of-the-art paper when it was published. Truly the pinnacle of the field back then. Twenty-something years later, we used it in our PhD causal inference class to pick holes in it.
We were given the data and asked to find all the errors in the paper, and to explain why something like this wouldn't pass the review process today. Imagine that. Such a brilliant contribution back then wouldn't even be published today. That's how far economics (and political science, and the social sciences in general) has come today. Pure theory papers are rare. Doing some back-of-the-envelope regressions of, say, economic growth against a host of variables to see which one has a significant impact - a standard in the 1980s, for example - is laughable today. Today, it is almost impossible to get an econ PhD from a decent university (say, top 400-500) without applying causal inference in your paper. Economists who still do simple regressions (or worse, elaborate their ideas without any data or proof) are by no means scientists. Economics will never be physics (even though economists love to use complex math to mimic physics). It is more likely to follow in the footsteps of medicine and psychology (behavioral economics in particular does that), by using randomized experiments or even quasi-experiments with some random variation that enables the researcher to choose treatment and control groups. Why does all this matter so much? Because of the implications this new way of doing economic research will bring to economic policy. Thus far, economic orthodoxy based on old theories has driven many policy conclusions. Some of the old theories were, in fact, proven correct by empirical research, but many were challenged. This has yet to be reflected in economic policy. An additional problem with economic policy is the political arena where such policies are devised. That's why I do my empirical research in political economics: to understand how political interests shape and discourage beneficial economic policies. Sometimes it's not just about the research findings. But good research findings, based on good empirical design, are still essential.
It won't be long before politicians are no longer able to dismiss them with the typical "I need a one-handed economist" argument. So what is causal inference and why does it matter? If we could summarize causal inference in one cliché it would be: correlation does not imply causality. How do you prove anything in social science? For example, that action A actually causes outcome B? Do better grades in school lead to higher incomes in life? If you just look at the data, the relationship is clearly linear and positive. But does that make it causal? No! There can be a host of unobserved factors that might affect both grades in school and salaries later in life. Like ability. Competent individuals tend to get better grades, and to have higher salaries. It wasn't the grades that caused their salaries to be higher, it was intrinsic ability. This is called an omitted variable bias - an issue that arises when you try to explain cause and effect without taking into consideration all the potential factors that could have affected the outcome (mostly because they were unobservable). Ideally we would need a counterfactual: what would the outcome be if history had played out differently? What would my salary be if I didn't go to college? Or, in the absence of metaphysical powers, we could compare individuals with different outcomes and grades, but everything else being the same. Or we could compare twins. Genetically identical, same upbringing, same income, etc. Give one a distinction grade and the other a non-distinction, and see how they end up in life. Problem is, we cannot really interfere with people's lives just for the sake of proving a point. An alternative is to simply match students into comparable groups based on all of their pre-observed characteristics: gender, parental income, parental education, previous school performance, etc. The problem with this is that we can only match students based on things we can observe.
We still cannot observe innate ability. The best way to prove causality in this case, as uncovered by our Nobel winners, is to first ensure that there is random assignment into treatment and control groups. This is essential. Why? Because randomization implies statistical independence. When we randomly pick who will be in the treatment and who will be in the control group, we make sure that the people in each group are statistically indistinguishable from one another. That way any difference in outcomes between the two groups should be a result of the treatment (in this case better grades), and nothing else. Angrist did this incredibly by taking the random assignment of Vietnam war veterans (through the random draft lottery) to see how the experience of war affected the incomes of a randomly selected group later in life, compared to their peers who were lucky enough to avoid the draft. Angrist and Krueger did it by looking at people born in the first and final quarters of a year (an obvious random assignment), where those born in Q1 have worse education and income outcomes than those born in Q4. But what if we cannot randomly assign by birth or lottery? Then we need a trick, something that generates an as-good-as-random assignment; a natural threshold of some sort. Using our grades example, in the UK a distinction threshold is 70 (first honours). If you get just marginally below, 69, you get a second. Comparing someone with 75 to someone with 60 is no good; there will obviously be differences. But 70 and 69 are likely to be very similar, with one being slightly luckier. So we compare only students between 69 and 71, where those above 70 are the treatment, and those below are the control. If there is a large enough jump, a discontinuous jump over the 70 threshold where those awarded a distinction have statistically significantly higher earnings than those who just barely failed to make it to the distinction grade, then we can conclude that better grades cause higher salaries.
If not, if there is no jump and the relationship remains linear, then we cannot make this inference. This design is called a regression discontinuity design (RDD). Imbens excelled at this in particular. Card's most famous contribution (with Krueger) was to use differences-in-differences (DID) to show how minimum wage increases impact employment. They used fast food restaurants in NJ and PA after NJ increased its minimum wage to show that employment actually went up, not down. As mentioned earlier, this particular paper, state-of-the-art at the time, did not satisfy the parallel trends assumption for a DID design (employment trends should have been identical in both states prior to the policy change and changed only in NJ afterwards), but Card made amends with other stellar papers on labor economics using even better research designs. Like his paper observing the labor market outcomes of 125,000 Cuban immigrants coming to Florida in 1980 (you know, the plot of Brian de Palma's Scarface). The point here was to show how one should think about conducting natural experiments in the social sciences. It's not about using some profound model or complex math. It's about changing the paradigm of drawing explicit conclusions from correlations and trends. Finally, I want to end by paying homage to professor Alan Krueger, who tragically took his own life in 2019, and who would have surely been up there yesterday, given that two of the most important contributions made by Angrist and Card, cited by the Nobel committee, were in both cases co-authored with Krueger. To borrow one of his quotes: "The idea of turning economics into a true empirical science, where core theories can be rejected, is a BIG, revolutionary idea." P.S. For those who want to learn more, I recommend a reading list:
- First, start with the Nobel committee's explanations.
- A popular version
- And a very good in-depth version
- Then, if you're a beginner, there is no better place to start than Angrist and Pischke: Mastering Metrics
- Then evolve to Dunning: Natural Experiments in the Social Sciences
- Then go for Angrist and Pischke: Mostly Harmless Econometrics (this was my bible for causal inference)
- And finally, for advanced reading, use Gerber and Green: Field Experiments.
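To make the regression discontinuity idea above concrete, here is a minimal sketch with simulated (entirely made-up) data, where students just above the 70 distinction cutoff receive a true salary jump of 3,000. Comparing mean salaries in a narrow bandwidth around the cutoff recovers that jump:

```python
import random

random.seed(0)

# Simulate students: exam scores around the 70 distinction cutoff,
# with a true salary jump of 3,000 for those at or above it.
students = []
for _ in range(10_000):
    score = random.uniform(50, 90)
    base = 25_000 + 200 * score + random.gauss(0, 1_000)
    salary = base + (3_000 if score >= 70 else 0)
    students.append((score, salary))

def rdd_estimate(data, cutoff=70.0, bandwidth=1.0):
    """Compare mean outcomes just above vs. just below the cutoff."""
    above = [y for x, y in data if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in data if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

effect = rdd_estimate(students)
print(round(effect))  # close to the true jump of 3,000
```

With a narrow enough bandwidth, students on either side of the cutoff are as good as randomly assigned, so the mean difference estimates the causal effect; real applications also fit local regressions on each side of the cutoff and check that nobody manipulated the running variable.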
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506632.31/warc/CC-MAIN-20230924091344-20230924121344-00243.warc.gz
CC-MAIN-2023-40
10,131
34
https://www.digitalhome.ca/threads/ota-network-status-globaltv.144970/page-12
code
Global sends the SD channel for the benefit of cable companies without a direct fibre link, so that they can receive an appropriate 4:3 version of the channel in the short term. I've heard that it is not their intention to maintain this for the long term. Canadian rules for subchannels are different from the American ones. Once an American station has an ATSC license, they can do what they want with the subchannels, as long as they maintain Educational/Instructional quotas on each channel, and they pay the FCC a cut for any third-party data services that they carry. In Canada, subchannels need to be specifically licensed by the CRTC. So far, the only subchannels licensed in Canada are on CFTV-DT in Leamington, and community-based repeaters like the Logan Lake TV Society. For Global to air anything other than the SD version of their main channel, they would need to get a separate CRTC license for any subchannel, which I imagine is one they might apply for someday. (Or more likely, launch a mobile ATSC version of the main channel, which again would not require a separate license.)
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00584.warc.gz
CC-MAIN-2023-14
1,090
5
https://github.com/farmio
code
226 contributions in the last year. Created a pull request in XKNX/xknx that received 5 comments: RemoteValue: store value directly in attribute - calculate value once when new payload is set instead of every time it is read.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321696.96/warc/CC-MAIN-20190824194521-20190824220521-00485.warc.gz
CC-MAIN-2019-35
399
5
http://www.edugeek.net/forums/nix/print-21468-vmware-workstation-installation-question.html
code
vmware workstation installation question Trying to install VMware-workstation-6.0.4-93057 onto Gentoo, using the included vmware-install.pl script. But it's asking something I'm not sure about and can't seem to find an answer for. It's the 2nd question I don't know the answer to. I put /etc/init.d/ but that's not right... Anyone offer a suggestion please? oasis vmware-distrib # perl vmware-install.pl Installing VMware Workstation. In which directory do you want to install the binary files? What is the directory that contains the init directories (rc0.d/ to
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770324.129/warc/CC-MAIN-20141217075250-00140-ip-10-231-17-201.ec2.internal.warc.gz
CC-MAIN-2014-52
551
7
http://www.ausphotography.net.au/forum/showthread.php?40662-Installing-cs4-on-my-vista-computer
code
So I have a 'borrowed' version of CS2 on my computer along with BDSIZER and a couple of other photography editing bits, but NOW I want to install my NEW FULL EXTENDED VERSION of CS4 on my computer. Could someone please advise me of what I need to take OFF and other things I need to do to avoid problems with loading this program on my computer please - I read somewhere you had to do something else to previous versions BEFORE you try to delete, say, CS2 from the computer, or the new install won't work. Advice from people well schooled in computers and loading CS4 (eg Kym, Rick, etc) would be much appreciated - I'm not going to try loading it till the weekend though... no hurry... just computer illiterate amongst other things. I need help please
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719908.93/warc/CC-MAIN-20161020183839-00358-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
736
4
http://forums.cnet.com/7723-7810_102-57506/which-antivirus-utility-protects-your-computer/?messageId=1560728
code
Don't need stinkin' AV apps with Linux I run SUSE 10 with all security updates. There's no need for an antivirus app (although I use one to scan a connected Windows machine). For all of the people who say that it's just not targeted because Windows is more popular, check out http://www.theregister.co.uk/2003/10/06/linux_vs_windows_viruses/ "Due to the strong separation between normal users and the privileged root user, our Linux user would have to be running as root to really do any damage to the system. He could damage his /home directory, but that's about it. So the above steps now become the following: read, save, become root, give executable permissions, run. The more steps, the less likely a virus infection becomes, and certainly the less likely a catastrophically spreading virus becomes. And Linux users are taught from the get-go to never run as root." Since I made my switch from Windows to Linux, I have never experienced any slowdowns or need to reinstall my OS due to spyware, viruses, or just because it "needs to be reinstalled" (which seemed to happen every six months or so with Windows).
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928350.51/warc/CC-MAIN-20150521113208-00215-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
1,146
6
https://developpaper.com/my-thoughts-on-using-windows-11/
code
My thoughts on using Windows 11 !!! This article will be continuously updated at https://blog.projectoms.com/p… !!! For the best reading experience, visit: https://blog.projectoms.com/p… Recently, I installed Windows 11 on a physical machine. I have to say that the moment I saw the desktop after boot, I felt like I was using macOS. As we all know, Windows 11 has no great changes at the kernel level, but the interface is brand new. There are now rounded rectangles, frosted glass, and translucent backgrounds everywhere in the system. Next, I'll show you some of the changes in the interface and functions. This time I am using Microsoft's internal preview, version 10.0.22000.100 (it still calls itself 10, which is a bit ridiculous). First, let's take a look at the lock screen after startup. The time that used to be in the lower left corner is now in the middle, and the font is also bolder. I have to say that this improvement is very good-looking. Also, I don't know if it's just the preview version, but I think the shutdown, restart, and update screens are very ugly: pure black, nothing but the text in the center and the loading icon. The desktop backgrounds of Windows 11 are also fine. Here I put an original 4K picture, which you can grab for yourself. Take a look at the search interface, and notice that the plain magnifying glass at the bottom has been redesigned. Common applications (at least those whose GUI uses the Windows SDK) now have rounded rectangle windows. Buttons have also become rounded rectangles, as has the notification box. Some content has not yet become rounded. Update 2021/08/06: this issue has been fixed in preview version 22000.120 (it was not fixed in 22000.100). To match the Control Center of macOS Big Sur (just kidding), Windows 11 also has a control center. Many default applications have new icons, which is very similar to what macOS did. 
The Settings app now looks like this. Next, I went to check on the old enemy of Windows 10: is the Control Panel still there? The result: the Control Panel is still there, and its icons and interfaces have been redesigned. Does this mean it will never be removed? [covers face and laughs] Take a look at the User Account Control interface (the administrator permission request), which is also fine. Update on August 6, 2021: the Alt+Tab interface we used to play with at school now looks like this. File Explorer now looks like this. Update on August 6, 2021: the application preview interface looks like this. By the way, this is the newly added widget function (?). Do you feel like you've seen it somewhere before? There are also multiple desktop functions; note that this is not simply creating a new desktop. By the way, the old screenshot tool is still there, but the icon has been redesigned. Also, Microsoft's own input method has become flat… There is also a new automatic split-screen function, and the "new" and "redesigned" Microsoft Store. One more thing: the right-click menu has now become like this. To be honest, it's a little uncomfortable. After clicking to display more options, a menu similar to the original one pops up. On August 5, 2021, the newly designed shortcut icon looks like this. The biggest regret of this preview version is that it still does not support running Android applications. I hope Microsoft releases a version that supports them as soon as possible; after all, Microsoft once said it would release the official version of Windows 11 this year. Another great regret is that the computer blue screen interface (because it is a preview version, it is a green screen) has not changed. Update 2021/08/05: some people say the blue screen has now become the same black interface as the shutdown and restart screens, but it seems my Windows 11 can only show a green screen, not a "black screen". 
Update 2021/08/05: the current audio prompt box (the one in the upper left corner) is still in the Windows 10 style (black, no frosted glass, no rounded rectangle). Update on August 6, 2021: I found that some icons are still the same and have not been redesigned. Update 2021/08/06: Microsoft's own Office software has not adapted to the new Windows 11 style?! Update on August 6, 2021: in the early morning of August 6, 2021, Microsoft released Windows 11 Insider Preview build 22000.120. For details, see: https://blogs.windows.com/win… I have also been using this version for a while, so let me summarize it for you. This version still does not support Android features. The app store looks a little better. Dozens of bugs were fixed. Dozens of new bugs were found. Update on August 13, 2021: in the early morning of August 13, 2021, Microsoft released Windows 11 Insider Preview build 22000.132. For details, see: https://blogs.windows.com/win… That is about my experience of using Windows 11 so far. I hope you like my article.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00381.warc.gz
CC-MAIN-2021-43
5,002
47
https://forums.adobe.com/thread/1256352
code
Flash Player no longer comes built in on the second-generation Kindle Fire or the Kindle Fire HD (both released in September 2012). Adobe is phasing out Flash, so it is no longer on any new tablets or mobile devices. There's a good workaround to make Flash work on the Kindle Fire, however. See here for the details: You need to install the "1 Mobile Market" from http://www.1mobile.com/app/market/. Then install the Dolphin Browser from the "1 Mobile Market." Then install Adobe Flash from here for the Kindle Fire HD http://www.mediafire.com/?33ew28ztmlar44e and here for the regular Kindle Fire http://www.mediafire.com/?ktt26yv3j6jt1iv. Hope that helps.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891808539.63/warc/CC-MAIN-20180217224905-20180218004905-00610.warc.gz
CC-MAIN-2018-09
673
4
https://wiki.debian.org/AxelRyll
code
Email: <aryll AT SPAMFREE web DOT de> Debian user for a long time. I don't remember when I started with Linux. I tried several distributions, but Debian Linux 2.2 (Potato) was the first release I remember using, as a small firewall at home. I'm a native German-speaking old person, and I'm interested in technical stuff. I would like to practice my English and learn more about Linux and Debian. I worked for a long time for a German company as a service technician, and at the moment I am trying to do the job of a network administrator. And I hope to give something back to the community and help others make Linux, and especially Debian, more successful.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100527.35/warc/CC-MAIN-20231204083733-20231204113733-00563.warc.gz
CC-MAIN-2023-50
700
5
http://lamouette.de/mitmach.php?lang=en
code
Interactive writing project for all who love to play with languages and words. Soon available at http://www.wordadventures.net - Collaborative Stories - The idea: one story, written alternately by different authors. The interaction with other authors turns our stories into a special adventure and brings a lot of fun. With each new sentence the story can take a turn. We have already written some of these stories, which will be used as a starting point for the project. For example the (German) story - Play on Words - Apart from that, the website will offer many plays on words, e.g. catch phrase, endless words, metamorphose, explaining words that were born while metamorphosing, and lots more...
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823614.22/warc/CC-MAIN-20181211083052-20181211104552-00295.warc.gz
CC-MAIN-2018-51
691
13
https://www.cryptonftasia.com/apikey/
code
Bybit has an API key feature. If you want to automate Bybit trading using third-party apps and the like, you will need an API key. However, some people don't know how to get one, so I'm writing this article. What are APIs? API is a term often used in software development work. APIs are used in many applications, and Bybit offers one too. If you're a programmer, you should be familiar with this term, but others may not be. API is an abbreviation for Application Programming Interface. It is a mechanism for calling and using the general-purpose functions and managed data of a program from other, external programs. Bybit API features: there are many things you can do by setting up Bybit's API. The advantage of using the API is that you don't have to build a program from scratch yourself; you can use a function just by calling it. In other words, even if you are not a programmer, you will be able to use these functions. By setting up Bybit's API, you can check the latest price of a cryptocurrency from your own application, and strengthen security by introducing a high-level security system. Automated trading systems: this API is often used by people who operate automated trading systems. By setting up the API, you gain the ability to start automated trading. By making use of the API, you can also make a profit without manual work. Furthermore, it can feed cryptocurrency information sites. Difficulty changing settings: by introducing the API, there is no need to recreate a program from scratch. On the other hand, you may find the configuration cumbersome. In order to introduce APIs and change settings, various operations must be performed, which can be troublesome for those who do not know much about them. You don't need as much knowledge as a programmer, but you do need some programming knowledge. API setting method: now, I will explain how to set up Bybit's API. 
First of all, you need to open an account, so please start by doing that. Please refer to the article below. Also note that two-factor authentication must be completed in order to use this feature. Log in, click the avatar at the top right of the home screen to open the menu, then click "API". You can create a key by clicking "Create New Key". Enter a name for your API key; any name is OK. Next, configure the API key permissions. Each permission, including whether to allow read or write access, is set individually below. After adding your IP address, click "Submit". Next, perform two-factor authentication: open Google Authenticator and enter the verification code. After entering the verification code, click "Confirm". Your API key has now been created, and you can actually use it with a tool. How to add an API key: creating one key doesn't mean you can't create more; it is also possible to add keys. The method is exactly the same as the acquisition steps above - for the key to be added, enter the necessary information in the same way as the first time. You may also find that you no longer need an API key after creating it. In that case, it is possible to delete the API key. You can also delete it by clicking the yellow pen in the "IP address is associated" item.
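Once you have a key, using it from code mostly means attaching a signature to each private request. The sketch below shows the HMAC-SHA256 signing scheme that Bybit's v5 REST API documents (timestamp + api_key + recv_window + query string); the key values are placeholders, and you should check Bybit's current API docs before relying on the exact header names or payload order:

```python
import hashlib
import hmac
import time

# Placeholder credentials -- substitute the key pair generated in the
# steps above. These values are made up for illustration.
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"

def sign_request(query_string: str, timestamp_ms: int, recv_window: int = 5000) -> str:
    """Build the HMAC-SHA256 hex signature for a Bybit v5 private request:
    HMAC(secret, timestamp + api_key + recv_window + query_string)."""
    payload = f"{timestamp_ms}{API_KEY}{recv_window}{query_string}"
    return hmac.new(API_SECRET.encode(), payload.encode(), hashlib.sha256).hexdigest()

ts = int(time.time() * 1000)
sig = sign_request("category=linear&symbol=BTCUSDT", ts)
print(sig)  # 64-character hex digest
```

The resulting digest is sent in the X-BAPI-SIGN header along with X-BAPI-API-KEY and X-BAPI-TIMESTAMP; if the signature or timestamp is wrong, the API rejects the request.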
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00788.warc.gz
CC-MAIN-2023-50
3,339
19
https://denvermobileappdeveloper.com/blog-details/Importance%20of%20Networking%20for%20Freelance%20iOS%20Developers
code
Importance of Networking for Freelance iOS Developers in 2024
Introduction to Networking for Freelance iOS Developers
Are you a freelance iOS developer seeking advancement? Networking may be your secret ingredient! In today's fast-paced technology business, networking can open up intriguing opportunities. Let's discuss how networking might boost your career as a freelance iOS developer.
The Benefits of Networking in the Technology Industry
Networking in the technology industry benefits freelance iOS developers greatly. Connecting with other experts opens doors to career-advancing opportunities, and networking in the IT community helps you keep up with developments in iOS. Networking lets you learn from experienced developers, find job vacancies or freelance assignments, and collaborate on novel ideas. You can also demonstrate your skills and knowledge to a wider audience, enhancing your visibility in the competitive tech market. Furthermore, networking builds credibility and trust among colleagues and potential clients. Attending industry events, online forums, and meetups will help you learn more and stay ahead in this ever-changing field. Remember, networking is about sustaining connections for professional advancement, not just making them.
Building a Professional Network as an iOS Developer
Today's competitive IT sector requires iOS developers to network. Networking lets you meet like-minded people, clients, and enterprises with fresh opportunities. Industry events, seminars, and meetups are fantastic places to network; discussing ideas with others can lead to partnerships or employment offers. Use LinkedIn to network with industry leaders and display your abilities, and join relevant groups and participate in discussions to stay current on industry developments. Reaching out to previous colleagues, mentors, or acquaintances in the sector can lead to new projects or referrals. Consider how you may help others when networking. 
Building long-term relationships based on respect and support is advantageous.
Tips for Effective Networking
Some networking tips for freelance iOS developers will help you build valuable tech contacts. Meet like-minded experts and possible clients at industry conventions. Prepare an elevator pitch that summarizes who you are and what you do; this helps you articulate your expertise in networking situations. Reach out to other developers on GitHub or Stack Overflow - developer relationships can lead to collaborations and jobs. Join professional networking groups or forums specific to iOS development, where you can connect with industry peers and follow industry trends. Send tailored emails or connect on LinkedIn with new contacts after networking events; building long-term freelance relationships requires maintaining these ties.
Utilizing Social Media for Networking
Social media has transformed digital communication. LinkedIn, Twitter, and GitHub can help freelance iOS developers network. LinkedIn lets you showcase your skills and projects, network with industry leaders, and keep up with trends and job openings. Twitter lets developers, tech businesses, and potential clients interact in real time through tweets and relevant conversations. On GitHub you can contribute to open-source repositories or exhibit your work in collaborative coding projects. Actively using these platforms builds trust and visibility among iOS developers. By sharing insights, seeking advice from peers, or attending virtual tech events on social media, you can make significant connections that could lead to interesting employment opportunities.
Collaborating with Other Developers and Companies
Collaboration with other developers and firms can help freelance iOS developers grow and learn. Working with industry peers can improve your skills and projects by providing insights, feedback, and support. Collaborating on projects lets you draw on each other's strengths and create new solutions. 
It also facilitates information sharing and ongoing growth through discussing ideas and best practices with peers. Collaborations improve your professional network and introduce you to potential clients or employers who may value your teamwork. Strong tech-community relationships can lead to referrals and mutually beneficial partnerships. Collaboration is a two-way street; offer your talents and help in exchange for others' feedback. Join the IT industry's collaborative culture to succeed as a freelance iOS developer.
The Impact of Networking on Career Growth and Opportunities
Networking is vital for freelance iOS developers' careers. Connecting with other developers can lead to new opportunities and partnerships, and building a strong network helps you keep up with trends, innovations, and job openings. Networking events, online communities, and social media allow iOS developers to demonstrate their skills. This exposure expands your professional network and your visibility in the tech community. Networking with other developers and firms can lead to projects and partnerships that improve skills and portfolios, and these ties typically lead to client or job referrals, helping freelance developers succeed. In today's competitive tech world, networking is crucial: freelance iOS developers must network to grow and build credibility. Active networking leads to new opportunities, collaborations, and professional advancement, and building partnerships with other developers and firms can lead to new initiatives and significant technology industry connections. Networking is about developing lasting relationships that benefit both sides, not merely meeting people. Remember, networking is a powerful tool for freelance iOS developers. Embrace it, put yourself out there, and watch your career grow in unexpected ways. Happy networking!
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817819.93/warc/CC-MAIN-20240421194551-20240421224551-00410.warc.gz
CC-MAIN-2024-18
5,818
36
https://openbase.com/python/fake-bpy-module-2.78
code
fake-bpy-module is a collection of fake Blender Python API modules for code completion in commonly used IDEs. fake-bpy-module uses the typing module and type hints, which are available from Python 3.7. Check that your Python version is >= 3.7. fake-bpy-module can be installed via a pip package or pre-generated modules. You can also generate and install modules manually. fake-bpy-module is registered on PyPI. You can install it as a pip package: pip install fake-bpy-module-<version> If you are installing fake-bpy-module for Blender 2.93, run the command below: pip install fake-bpy-module-2.93 If you are installing fake-bpy-module for the latest Blender build (master branch daily build, powered by nutti/blender-daily-build), run the command below: pip install fake-bpy-module-latest Note: For PyCharm users, change the idea.max.intellisense.filesize value in the idea.properties file to more than 2600, because some modules are too big for IntelliSense to work. Download pre-generated modules from the Release page. The installation process via pre-generated modules differs by IDE. See the installation instructions below for details. You can also generate modules manually. See Generate Module for details. If you want to report a bug, request features or discuss this add-on, see ISSUES.md. Note: Registration on blender.chat is required for accessing the fake-bpy-module channel. If you want to contribute to this project, see CONTRIBUTING.md. Indie game/application developer. I spend most of my time improving Blender and the Unreal game engine by providing extensions. Support via GitHub Sponsors
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780054023.35/warc/CC-MAIN-20210917024943-20210917054943-00422.warc.gz
CC-MAIN-2021-39
1,579
21
https://www.eclipse.org/mosaic/docs/extending_mosaic/event_scheduler/
code
The different modules of the Application Simulator communicate over events that are triggered at a specific simulation time. The following classes and interfaces model these events.
Event contains the information that is necessary to process an event: it describes when it should be processed and which information is processed. Moreover, an event has an assigned priority.
Attributes of Event
The class Event contains the following attributes:
- long time: defines the time when the execution of the event is triggered.
- long nice: defines the priority of the event. When multiple events are scheduled for the same time, the events are ordered by priority in ascending order.
- List<EventProcessor> processors: is a list of components that shall process the event.
- Object resource: is an object that contains additional information designated for the processor of the event. The resource can be any object.
Methods of Event
- Event(): there are multiple constructors for Event with different parameters. Every constructor sets default values for the attributes that are not defined by the arguments of the constructor.
- Event newTime(long time): allows the creation of a new event with a new execution time based on this event.
- String getResourceSimpleClassName(): returns the class name of the resource as a String.
- int compareTo(Event event): implements the standardized Java interface Comparable. To order the events, first the time of the event is evaluated. In case the times are equal, the priority of the events is compared.
EventManager defines the method void addEvent(Event event) that needs to be implemented to add an event for execution.
EventScheduler extends the interface EventManager and is used for classes that trigger events.
Methods of EventScheduler
- boolean isEmpty(): returns true if the scheduler contains no elements, otherwise it returns false.
- long getNextEventTime(): returns the time of the next event.
- long getScheduledTime(): returns the time when the last event was executed. 
- List<Event> scheduleEvents(long time): returns a list of events that are scheduled for a certain simulation time.
- Set<Event> getAllEvents(): returns a set of all events that are considered by the scheduler.
EventSchedulerImpl is an implementation of the interface EventScheduler.
EventProcessor defines how the execution module gets the events. The execution module therefore has to implement the following methods:
- void processEvent(Event event): the module processes the event.
- boolean canProcessEvent(): returns true when the module is currently able to process new events, otherwise it returns false.
In some situations it is useful to intercept events before they actually reach the intended processors. By intercepting the events it is possible to apply further monitoring and to filter which events the event processors receive. The class EventInterceptor is used to construct objects of the type InterceptedEvent. In the constructor it is possible to specify an EventManager that manages the intercepted events. Moreover, objects of the type EventProcessor can be specified that shall process the intercepted events.
InterceptedEvent extends the class Event. It is used to provide type-safe allocation of events that shall be intercepted.
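MOSAIC itself is written in Java, but the Event/EventScheduler contract described above is compact enough to sketch in a few lines of Python. This is an illustration of the design (names follow the text), not project code:

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass(order=True)
class Event:
    """An event due at `time`; `nice` breaks ties (lower runs first)."""
    time: int
    nice: int = 0
    processors: List[Callable[["Event"], None]] = field(default_factory=list, compare=False)
    resource: Any = field(default=None, compare=False)

class EventScheduler:
    """Priority-queue scheduler mirroring the interface described above."""

    def __init__(self) -> None:
        self._queue: List[Event] = []
        self._scheduled_time = 0

    def add_event(self, event: Event) -> None:      # EventManager.addEvent
        heapq.heappush(self._queue, event)

    def is_empty(self) -> bool:                     # EventScheduler.isEmpty
        return not self._queue

    def get_next_event_time(self) -> int:           # time of the next due event
        return self._queue[0].time

    def get_scheduled_time(self) -> int:            # time of the last schedule call
        return self._scheduled_time

    def schedule_events(self, time: int) -> List[Event]:
        """Pop and process every event due at or before `time`,
        in (time, nice) order, and return the processed events."""
        done: List[Event] = []
        while self._queue and self._queue[0].time <= time:
            event = heapq.heappop(self._queue)
            for processor in event.processors:      # EventProcessor.processEvent
                processor(event)
            done.append(event)
        self._scheduled_time = time
        return done
```

Ordering the dataclass by (time, nice) mirrors Event.compareTo: the execution time is evaluated first, and priority only breaks ties between events scheduled for the same time.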
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647895.20/warc/CC-MAIN-20230601143134-20230601173134-00159.warc.gz
CC-MAIN-2023-23
3,198
36
https://communities.bmc.com/thread/173223
code
What kind of database server are you using - Oracle or SQL Server? For Oracle, BSA and BDSSA (and bdssa_portal) must all be in separate instances if they are on the same system; for SQL Server that's not an issue. The reason for separating them onto different database services is performance. If you plan to have a high-usage site for BSA and BDSSA, you need a database server that can handle whatever load you are throwing at it. Running lots of jobs and running an ETL all at the same time is very resource intensive, so that can work fine on a single database server given the right hardware, or you might need separate systems.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823183.3/warc/CC-MAIN-20181209210843-20181209232843-00064.warc.gz
CC-MAIN-2018-51
634
4