url | tag | text | file_path | dump | file_size_in_byte | line_count
---|---|---|---|---|---|---
stringlengths 13-4.35k | stringclasses 1 value | stringlengths 109-628k | stringlengths 109-155 | stringclasses 96 values | int64 112-630k | int64 1-3.76k
https://forums.splashdamage.com/t/bugs-in-execution-mode/233064 | code | Hello, here are some bugs with execution mode servers:
- The "best player" shown at the end of the round is the player with the most XP on the whole server, not the best player of the current round.
- There is no execution mode filter in the server browser.
- Sometimes when trying to join an execution mode server with a match already started, you can get a black screen forever. But you can still access the console, and typing the command "refresh manifests" will allow you to join the server (I don't remember the exact command name). EDIT: The "refresh manifests" command doesn't do anything; you just have to wait 1-2 minutes on the black screen.
- When you finish planting the bomb after the timer has expired, you hear both the "round lost" and "overtime activated" voice lines.
That's all I noticed. Also, thank you for bringing execution mode back, it feels so good. | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548537.21/warc/CC-MAIN-20191213020114-20191213044114-00452.warc.gz | CC-MAIN-2019-51 | 868 | 6 |
https://discord.me/trulygaming321 | code | This server is a gaming community server for the BBB Family. Joining is free and no money needs to be paid. Payment is only available for donations during live-streamed games like multiplayer, TDM, Battle Royale, etc. No spam messages are allowed and no abusive language is allowed. No political talk and no strong language. Anyone can join this group; the rules should just be followed before joining. Welcome to our group. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00292.warc.gz | CC-MAIN-2023-06 | 408 | 1 |
https://www.brainkart.com/article/Knowledge-Organization-and-Management_8982/ | code | Knowledge Organization and Management
The advantage of using structured knowledge representation schemes (frames, associative networks, or object-oriented structures) over unstructured ones (rules or FOPL clauses) should be understood and appreciated at this point. Structured schemes group or link small related chunks of knowledge together as a unit. This simplifies the processing operations, since knowledge required for a given task is usually contained within a limited semantic region, which can be accessed as a unit or traced through a few linkages.
But, as suggested earlier, representation is not the only factor that affects efficient manipulation. A program must first locate and retrieve the appropriate knowledge in an efficient manner whenever it is needed. One of the most direct methods for finding the appropriate knowledge is exhaustive search, or the enumeration of all items in memory. This is also one of the least efficient access methods. More efficient retrieval is accomplished through some form of indexing or grouping. We consider some of these processes in the next section, where we review traditional access and retrieval methods used in memory organizations. This is followed by a description of less commonly used forms of indexing.
A "smart" expert system can be expected to have thousands or even tens of thousands of rules (or their equivalent) in its KB. A good example is XCON (or R1), an expert system developed for the Digital Equipment Corporation to configure their customers' computer systems. XCON has a rapidly growing KB which, at the present time, consists of more than 12,000 production rules. Large numbers of rules are needed in systems like this, which deal with complex reasoning tasks. System configuration becomes very complex when the number of components and corresponding parameters is large (several hundred). If each rule contained about four or five conditions in its antecedent (If part) and an exhaustive search were used, as many as 40,000-50,000 tests could be required on each recognition cycle. Clearly, the time required to perform this number of tests is intolerable. Instead, some form of memory management is needed. We saw one way this problem was solved, using a form of indexing with the RETE algorithm described in the preceding chapter. More direct memory organization approaches to this problem are considered in this chapter.
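To make the arithmetic concrete, here is a minimal Python sketch (all rule names and attributes are illustrative, and the index is a toy stand-in for the kind of discrimination network RETE builds) contrasting exhaustive condition testing with a simple first-condition index:

```python
from collections import defaultdict

# Toy rule base: each rule is a list of (attribute, required_value) conditions.
rules = {
    f"rule{i}": [("cpu", i % 3), ("disk", i % 5), ("ram", i % 7), ("slots", i % 2)]
    for i in range(10_000)
}

facts = {"cpu": 1, "disk": 2, "ram": 3, "slots": 0}

# Exhaustive matching: every condition of every rule is tested on each cycle.
exhaustive_tests = sum(len(conds) for conds in rules.values())  # ~40,000 tests

# Indexed matching: group rules by their first condition so that only
# plausible candidates are tested at all.
index = defaultdict(list)
for name, conds in rules.items():
    index[conds[0]].append(name)

candidates = index[("cpu", facts["cpu"])]
indexed_tests = sum(len(rules[name]) for name in candidates)

print(exhaustive_tests, "condition tests without an index")
print(indexed_tests, "condition tests with a first-condition index")
```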
We humans live in a dynamic, continually changing environment. To cope with this change, our memories exhibit some rather remarkable properties. We are able to adapt to varied changes in the environment and still improve our performance. This is because our memory system is continuously adapting through a reorganization process. New knowledge is continually being added to our memories, existing knowledge is continually being revised, and less important knowledge is gradually being forgotten. Our memories are continually being reorganized to expand our recall and reasoning abilities. This process leads to improved memory performance throughout most of our lives.
When developing computer memories for intelligent systems, we may gain some useful insight by learning what we can from human memory systems. We would expect computer memory systems to possess some of the same features. For example, human memories tend to be limitless in capacity, and they provide a uniform grade of recall service, independent of the amount of information stored. For later use, we have summarized these and other desirable characteristics that we feel an effective computer memory organization system should possess.
It should be possible to add and integrate new knowledge in memory as needed without concern for limitations in size.
Any organizational scheme chosen should facilitate the remembering process. Thus, it should be possible to locate any stored item of knowledge efficiently from its content alone.
The addition of more knowledge to memory should have no adverse effects on the accessibility of items already stored there. Thus, the search time should not increase appreciably with the amount of information stored.
The organization scheme should facilitate the recognition of similar items of knowledge. This is essential for reasoning and learning functions. It suggests that existing knowledge be used to determine the location and manner in which new knowledge is integrated into memory.
The organization should facilitate the process of consolidating recurrent incidents or episodes and “forgetting” knowledge when it is no longer valid or no longer needed.
These characteristics suggest that memory be organized around conceptual clusters of knowledge. Related clusters should be grouped and stored in close proximity to each other and be linked to similar concepts through associative relations. Access to any given cluster should be possible through either direct or indirect links such as concept pointers indexed by meaning. Index keys with synonymous meanings should provide links to the same knowledge clusters. These notions are illustrated graphically in Fig 9.1, where the clusters represent arbitrary groups of closely related knowledge, such as objects and their properties or basic conceptual categories. The links connecting the clusters are two-way pointers which provide relational associations between the clusters they connect.
| s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577478.95/warc/CC-MAIN-20190923172009-20190923194009-00107.warc.gz | CC-MAIN-2019-39 | 5,471 | 13 |
https://atlex00.com/tags/tensorflow/ | code | Original page https://www.tensorflow.org/tutorials/keras/save_and_load

Show model details

```python
import tensorflow as tf

def create_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model

# Create a basic model instance
model = create_model()

# Display the model's architecture
model.summary()
```

Load MNIST data

```python
# MNIST
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].
```
We can install TensorFlow via pip easily, but we should take a little more care if we want to enable GPU support.

Requirements: https://www.tensorflow.org/install/gpu#software_requirements

Here is how I installed my NVIDIA GPU environment.

Install prerequisites:

```bash
sudo apt-get install libcupti-dev  # already installed in my case
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```

Install cuDNN. Download a compatible version from https://developer.nvidia.com/rdp/cudnn-download.

```bash
tar -xzvf cudnn-10.2-linux-x64-v22.214.171.124.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.
```
TensorBoard

We can easily visualize neural networks written in TensorFlow in graph form with TensorBoard (it can actually do more). https://www.tensorflow.org/tensorboard/get_started

Install: as of 2020/07/09, TensorBoard is installed when you install TensorFlow with pip.

```bash
# pip install -U tensorboard  <- already installed when you install TensorFlow with pip; running it again can conflict and cause problems
```

Simple sample code. First, create a simple model.

```python
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.
```
Official: https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/extend/architecture.md

TensorFlow has master, client, and worker components. You can imagine a distributed system, and that is correct: TensorFlow is designed to form a cluster.

Distributed TensorFlow

And here is the official document about distributed TensorFlow with sample code. https://github.com/tensorflow/examples/blob/master/community/en/docs/deploy/distributed.md (Deprecated: the link has expired)

Another sample: here is sample cluster code by IONOS (one of the biggest German ISPs). https://www.ionos.de/community/server-cloud-infrastructure/tensorflow/einrichten-eines-verteilten-tensorflow-clusters-auf-cloud-servern/ You can see there are parameter servers and worker servers.
Intro - Official quickstart for beginners https://www.tensorflow.org/tutorials/quickstart/beginner

Import the TensorFlow library and load the official MNIST dataset.

```python
import tensorflow as tf
mnist = tf.keras.datasets.mnist
```

Split the MNIST dataset into training and test sets, and normalize the pixel values (from 0 to 1).

```python
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```

The meaning of the values is quoted below. https://conx.readthedocs.io/en/latest/MNIST.html

The MNIST digits are grayscale images, with each pixel represented as a single intensity value in the range 0 (black) to 1 (white).
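From here the official quickstart goes on to define and train a small model. A sketch reconstructed from the linked tutorial (not from this excerpt, so details may differ):

```python
# Define a small classifier over the flattened 28x28 images.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)  # one logit per digit class
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
```
| s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00354.warc.gz | CC-MAIN-2022-49 | 3,383 | 5 |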
https://drogger.hatenadiary.jp/ | code | Tap the [Stop] button to disconnect the Bluetooth connection and terminate the service.
Background service notification
Communication between Drogger GPS and DG-PRO1 is done in the background. There is no need to bring Drogger GPS to the front or to keep its screen displayed. While it is running in the background, "Drogger GPS Location service" is displayed in the Android notification area.
Logging (Version 1.6.27 or later)
Logging can record the position information etc. of DG-PRO1 under the following conditions.
You can choose from four formats: None / CSV / GPX 1.0 / GPX 1.1. The default is "None". The choice between GPX 1.0 and 1.1 depends on the app that analyzes this data; the 1.0 format can record more compactly.
Save speed and direction
Also save speed and direction in addition to the location information
Also save accuracy information in addition to location information
Frequency of logging
Specify the time interval between log records. "All measurement data" logs every update, i.e., at the same rate as the update rate.
Maximum log size (MB)
Specify the maximum size of one log file in MB. When a log becomes larger than this, recording automatically continues in a new file.
Size of log space (MB)
Logs would otherwise take unlimited disk space. By specifying this size, the oldest logs are deleted automatically.
Details of logging
In CSV format, a new file is created each time a Bluetooth connection is made.
In GPX format, a new track segment (trkseg) is started each time a Bluetooth connection is made.
In GPX format, a new file is created when the service starts.
When using the automatic reconnection function, service start and Bluetooth connection do not necessarily coincide.
When a log reaches the maximum size, a new file is created regardless of format.
When creating a new file, the size of the log space is checked and old logs are deleted as necessary.
The file name is automatically generated in the format [date_time.format]. The folder where logs are saved is displayed in the [Log List] window.
Logging works regardless of whether the Mock service is enabled or disabled.
Estimated logging size
The estimated time to fill 1 MB of log, shown for each format with full information enabled.
Specify the port number on which the TCP server listens.
First, if TCP client is enabled as the output service type, NMEA messages from the receiver are enabled when Bluetooth connects.
Next, the app tries to connect to the server. Once connected, it sends NMEA messages to the TCP server. If the TCP connection is unexpectedly disconnected, it tries to reconnect every 100 milliseconds.
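As an illustration of consuming this stream, here is a minimal Python sketch of a TCP client reading NMEA sentences; the host and port are placeholders for whatever you configure in the app:

```python
import socket

HOST, PORT = "192.168.1.50", 8500  # placeholders: the device's address and the configured port

with socket.create_connection((HOST, PORT)) as conn:
    # Wrap the socket in a file object so line-delimited NMEA sentences can be read easily.
    with conn.makefile("r", encoding="ascii", errors="replace") as stream:
        for line in stream:
            sentence = line.strip()
            if sentence.startswith("$"):  # NMEA sentences begin with '$'
                print(sentence)
```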
The number of location updates per second (actual measured value)
[Enabled] DG-PRO1 location data is used by other apps. [Disabled] DG-PRO1 location data is not used by other apps.
Power mode of the receiving module
Logging ... or No logging
Indicates the operation status of logging data
Displays the internal software version of the receiving module. It can be switched ON/OFF with the switch on the right.
Displays location data (longitude, latitude, speed, direction, accuracy, number of satellites used, etc.). These are the original values received from DG-PRO1.
Regardless of the measurement rate of DG-PRO1, the values are updated once per second.
The following strings are appended after the above status, depending on the state.
Differential correction data by SBAS and RTK is used
RTK is in FLOAT mode
RTK is in FIX mode
Note that RTK (real-time kinematic) is not supported by DG-PRO1; this indicator is prepared for the future.
Shows location traces on Google Maps. Up to 1,800 points are displayed; once this is exceeded, the oldest points are removed.
For example, if the update rate is 10 Hz, that is 600 points per minute, so the trace covers 3 minutes.
Turn on the Picture switch to display Google Maps satellite imagery. Turn it OFF for the standard map mode.
Unlike the other views, this map display retrieves location values from Android OS; it does not receive them directly from DG-PRO1.
Since these are Android OS location values, it receives and displays the same data as other apps that use GPS.
Therefore, when Mock is enabled, this is the location data of DG-PRO1. When Mock is disabled, the built-in GPS is used.
Graphically displays the satellite information received by DG-PRO1.
This information is updated once per second (from Ver 23 onward; up to Ver 22 it was updated once per 10 position updates), regardless of the update rate of DG-PRO1.
Satellite number. Meaning of the first character: G: GPS, S: SBAS, Q: QZSS, R: GLONASS, E: Galileo, B: BeiDou, I: IMES. Meaning of letters after the number: D: differential correction data, R: RTCM*2, M: SLAS*3
Satellite direction (north 0°, increasing clockwise)
Satellite elevation angle (horizontal 0°, directly overhead 90°)
Satellite signal-to-noise ratio (dBHz)
Used for navigation
Receiving normally, but not used for navigation
Signal confirmed, but it cannot be used
Signal received, but it cannot be confirmed
Clear map trajectory
Tap the icon at the top left of the map.
Each setting is explained on its item in the app; please refer to it there.
The following is a supplementary explanation.
Automatic reconnect interval
Specify the time (in seconds) to wait before automatically trying to reconnect when the Bluetooth connection is lost. If zero is specified, automatic reconnection is not performed. The default is zero.
When the receiver is connected to a car's power supply and used for navigation etc., the Bluetooth connection is lost when the ignition is turned OFF.
After that, when connection becomes possible again with the ignition ON, it reconnects automatically.
Connection attempts are made from the Android device.
Too short an interval consumes the Android device's battery.
We recommend 30 seconds or more. Also set [Number of connection retries] to 1 or 2.
Automatic startup at boot
Used for car navigation systems where Android shuts down when the ignition is turned off.
When this setting is ON and Android starts (boots) with the ignition ON, the service starts automatically and the Bluetooth connection is initiated.
App to be started after connection
You can specify an app to launch when the Bluetooth connection is completed. Select from the list of apps on your device. If no app needs to be launched, select "NONE" at the top of the list.
The power mode setting controls the power saving of the receiver module.
The further down the table, the more power is saved.
No power saving is performed.
Automatically saves power within a range that does not affect measurement.
Measures periodically. The module is in a standby state while saving power.
Powers on periodically. While saving power, the state is almost the same as power OFF.
The power-saving setting takes precedence over the measurement/update rate. For example, to measure at 10 Hz, the power mode must be Full power or Balanced.
The [Power mode - Screen off] setting can be combined with any setting while the Android screen is displayed. Conversely, if you want to use position information continuously even while the screen is off, select Balanced (for example, navigating Google Maps with the screen OFF, using voice guidance alone).
For accuracy, it is better to receive many satellites, but if you need a high update rate, adjustment is necessary.
Depending on the number of received satellites and the number of GNSS constellations selected, the CPU of the receiving module may be delayed and the update rate may decrease.
Adjustment is required according to conditions such as GNSS selection, the minimum elevation angle of satellites, and the minimum signal level.
Below is a selection example of update rate and GNSS.
In addition to the above, you can reduce the number of satellites used and improve the update rate and accuracy with the following methods.
Lowest satellite elevation angle
Filters satellites by their elevation angle as seen from the receiver. Satellites located at angles smaller than the specified lowest satellite elevation angle are not used for navigation.
Accuracy is best when the satellite is directly overhead (90°).
As the angle decreases, reflected waves from buildings, mountains, the ground, etc. are more easily received.
Reflected waves degrade accuracy because their path differs from the true distance.
In places with many buildings, you can improve accuracy by increasing this angle and enabling more GNSS constellations.
Minimum signal level
Filters satellites by received signal strength (S/N ratio). Satellites with signals weaker than the specified minimum signal level are no longer used for navigation.
By adjusting the update rate, GNSS selection, minimum elevation angle, and minimum signal level according to how you use DG-PRO1, you can improve accuracy while securing the necessary update rate.
A-GNSS is a mechanism to shorten the time from when the receiver is turned on until accurate positioning is possible.
Normally, the receiver obtains the satellites' orbital arrangement etc. from the satellites themselves, but in poor radio conditions this cannot be received properly, or it takes a very long time.
Drogger GPS receives the satellites' position information from the Internet *6 and sends it to DG-PRO1.
Satellite position information for the GNSS constellations enabled in the settings is transmitted when Bluetooth connects.
If you change which GNSS are enabled, you can send the satellite position information for the newly enabled GNSS by stopping the Bluetooth connection once and starting it again.
This information is for developers who create apps.
By using the following code, you can start Drogger GPS service from your app.
```java
Intent intent = new Intent();  // the original goes on to set the Drogger GPS service component here (omitted in this excerpt)
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
    context.startForegroundService(intent);  // Android 8.0+ requires starting the service in the foreground
else
    context.startService(intent);
```
End detection of Mock Provider
When the Drogger GPS service terminates due to Bluetooth disconnection, void onLocationChanged(Location location) is no longer called.
The same happens when the position cannot be measured, such as in a tunnel, so the app needs some other way to determine which situation it is in.
Although this is not explained in the Android API documentation, when the Mock Provider service terminates, the normal built-in GPS provider is enabled and void onProviderEnabled(String provider) is called. The application can then get location information by calling requestLocationUpdates again.
*1: However, each app can detect that the information comes from a mock provider rather than from the built-in GPS. If an app refuses such information, DG-PRO1's location information cannot be used in that app. | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202889.30/warc/CC-MAIN-20190323161556-20190323183556-00049.warc.gz | CC-MAIN-2019-13 | 10,489 | 111 |
http://www.beautylish.com/v/rivis/how-to-do-a-classic-smokey-eye-look-iman-inspired?ref=related | code | For those of you that follow my blog (link above), you've waited patiently for this tutorial from my past FOTD "IMAN & NAOMI CAMPBELL INSPIRED MAKEUP LOOK". Thank you so much for being patient.
This look features the classic black smokey eye, which is IMAN's signature look, and NAOMI CAMPBELL's deep lip.
Please subscribe to my channel and leave comments below.
FOLLOW MY BLOG!
Friend Me: www.facebook.com/sherryblossombeauty
Follow Me: www.twitter.com/mssherryblossom
I WOULD LOVE FOR YOU TO SUBSCRIBE TO MY CHANNEL. CLICK THE SUBSCRIBE BUTTON ABOVE! Thank You! | s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164000853/warc/CC-MAIN-20131204133320-00088-ip-10-33-133-15.ec2.internal.warc.gz | CC-MAIN-2013-48 | 558 | 7 |
http://bikewalktn.blogspot.com/2010/04/making-neighborhoods-back-into.html | code | While this article is long and rambling, I found it interesting that even in Seattle people struggle to find that sense of community within their own neighborhoods. But by making neighborhoods more walkable, sociable, sustainable, and safe, maybe, just maybe, we can get back that freedom that you and I had as kids. Read more by clicking here:
I think David Roberts hits the nail on the head with his wrap up statement:
"..one of the biggest challenges in years ahead, as we attempt to densify and green our communities, will be retrofitting existing neighborhoods to increase walkability, sociability, sustainability, and safety. It's worth a minute of anyone's time to ponder how they could make their own surroundings more amenable to spontaneous, non-commercial, human-scale social interaction."
Isn't that what we are ultimately trying to accomplish through Bike Walk Tennessee? More greenways, green spaces, connectivity within our neighborhoods - be it with footpaths, bike paths, or safe roads for cycling.
| s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119080.24/warc/CC-MAIN-20170423031159-00112-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 1,230 | 6 |
http://search.proquest.com/docview/304792873 | code | Development of a multiregional framework and demonstration of its feasibility for strategic preparedness of interdependent regions
Any region in the US is a complex, interconnected, and interdependent system of systems with multiple stakeholders, spanning multiple sub-regions, and producing a very large number of commodities and products. This dissertation provides a holistic, methodological framework to model this large-scale and complex system from its inherent multisectoral and multiregional interdependencies for strategic preparedness decisions.
A component of this framework is developed by extending the Inoperability Input-Output Model (IIM), which currently generates average impact estimates across geography. Such average estimates may lead to overlooking geographically concentrated risks or significant cross-regional interdependencies, which are important in evaluating relevant strategic preparedness options. Part of this dissertation extends the IIM to model the interdependencies among the various regions in the US by introducing and developing the Multiregional IIM (MRIIM) and introducing the spatially explicit concepts of intraregional and interregional interdependency matrices, A* and T*, respectively.
The MRIIM possesses various properties, resulting from its construction and its databases, which guarantee unique solutions when estimating the cascade of disaster impacts across regions and which guarantee convergence when computational methods are applied. This dissertation also develops a geodatabase schema and computation methods supporting the deployment of the MRIIM for preparedness decisions. Finally, a custom MRIIM geodatabase was constructed on a WAMP+M (WAMP+M = Windows operating system, Apache HTTP server, MySQL database, PHP scripting language, +Mapserver) technology stack for open source deployment of the framework.
The MRIIM has been demonstrated using databases from Hurricane Katrina and simulation results from Sandia National Labs as inputs. In particular, one contribution of this dissertation is the demonstration that ignoring the interregional interdependencies leads to possible overestimation or underestimation of regional economic impacts under certain scenarios. Other aspects of this model and the preparedness framework are discussed. The preparedness framework in this dissertation combines contributions on several topics related to the development of a feasible multiregional interdependency analysis system for strategic preparedness.
Area planning & development
0999: Urban planning
0999: Area planning & development | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982931818.60/warc/CC-MAIN-20160823200851-00149-ip-10-153-172-175.ec2.internal.warc.gz | CC-MAIN-2016-36 | 2,591 | 8 |
https://community.smartbear.com/t5/SoapUI-Open-Source/validate-same-element-from-multiple-records-in-single-response/td-p/149634 | code | I have an API that will return x number of records. For instance, it may return all books written by author x.
The response will contain the number of books returned, such as 4 books written by author x.
I want to be able to validate the same elements, such as publisher, for each book record returned.
Get all books written by X.
-> 4 books returned.
Loop through the returned response for each book
- Check book 1 was published by Z
- Check book 2 was published by Y
- Check book 3 was published by X
- Check book 4 was published by W
I've written a script to do this, but is there another way to handle this sort of situation that is faster than writing it for each call?
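A common approach is a single script assertion that loops over the repeated elements instead of a hand-written check per record (in SoapUI itself this is typically done with a Groovy script assertion). The idea, sketched here in Python against a made-up response with hypothetical element names:

```python
import xml.etree.ElementTree as ET

# A made-up response with repeated book records (element names are hypothetical).
response = """
<books count="4">
  <book><title>A</title><publisher>Z</publisher></book>
  <book><title>B</title><publisher>Y</publisher></book>
  <book><title>C</title><publisher>X</publisher></book>
  <book><title>D</title><publisher>W</publisher></book>
</books>
"""

root = ET.fromstring(response)
books = root.findall("book")

# One generic check applied to every record returned.
assert len(books) == int(root.get("count")), "record count should match the count attribute"
expected_publishers = ["Z", "Y", "X", "W"]
for book, expected in zip(books, expected_publishers):
    assert book.findtext("publisher") == expected, "unexpected publisher for " + book.findtext("title")
print("all", len(books), "records validated")
```
| s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593994.14/warc/CC-MAIN-20200118221909-20200119005909-00135.warc.gz | CC-MAIN-2020-05 | 661 | 11 |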
https://www.uogfsae.com/ | code | Gryphon Racing aims to create an environment where students can gain real-world experience working in a design team with the common goal of building a performance vehicle for FSAE competitions. This gives students the opportunity to explore and pursue their passion for knowledge through firsthand experience.
Formula SAE (FSAE) is a student design competition organized by the Society of Automotive Engineering. Each year, hundreds of university teams from around the world spend months designing, building and testing an open wheeled race car. These cars are designed to perform as an autocross vehicle and must be able to compete in a variety of events. Every season, teams travel to competitions around the world where they compete in both static and dynamic events.
A series of criteria is used to evaluate each team's car's potential as a production item. Eight events evaluate the team's design, each event having a set number of points to be achieved, and the team with the highest total at the end of the competition receives first place. The eight events are split into two categories: static and dynamic. Static events consist of three presentations (design, cost, and marketing), while dynamic events are autocross, acceleration, skidpad, endurance, and fuel economy. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644915.48/warc/CC-MAIN-20230530000715-20230530030715-00549.warc.gz | CC-MAIN-2023-23 | 1,295 | 3 |
http://stackoverflow.com/users/431080/ggkmath | code | Apparently, this user prefers to keep an air of mystery about them.
9 Does placing GPL licensed software on server qualify as 'distribution' if end user never sees it? [closed] sep 4 '10
6 1d linear convolution in ANSI C code? dec 7 '11
5 Simple indexing in a cell array oct 15 '10 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065910.17/warc/CC-MAIN-20150827025425-00251-ip-10-171-96-226.ec2.internal.warc.gz | CC-MAIN-2015-35 | 281 | 4 |
https://furhatrobotics.com/conversational-agents/ | code | Embodied Conversational Agents
Furhat as a platform for developing Embodied Conversational Agents (ECAs)
Researchers in artificial intelligence and conversational technologies are embodying conversational applications on social robots to understand the implications of conversational agents in our everyday lives.
Furhat comes with a fully fledged conversational platform that allows researchers and developers to build full conversational agents with Furhat. The platform also comes with easily programmable interfaces that make it straightforward to plug any other dialogue system, developed on other platforms, directly into Furhat, making it possible to use all the out-of-the-box functionality that comes with the Furhat platform together with other research platforms used to design conversational agents.
- Full-functionality conversational platform
- Easy to connect to other dialogue systems
- Out-of-the-box Speech and Computer Vision systems
- Support for tens of personalities and languages
Book a demo
Our online demos are the nearest thing to meeting Furhat in real life
View our official product documentation on the Furhat platform
Understand more about the robot, software, SDK developing skills, warranty & support and more | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00711.warc.gz | CC-MAIN-2020-40 | 1,258 | 12 |
https://community.spiceworks.com/topic/109366-task-tracking-software | code | Looking for some opinions on network/team-oriented task tracking software. We're not necessarily looking for project management software as much as we are for the ability to centrally track tasks and projects within our group, update status, add notes, change percentage complete, etc. We do not require bits like associated time, associated costs, etc.
I'm not sure if Microsoft Project Server would be overkill for us. We have investigated Project.Net, but it's pretty plain and ugly. We're looking for something ideally free, more for our own department (14 people). It can be cross-platform compatible, but Windows is preferred. A glorified product that expands on the ability of Tasks (and shared Tasks) within Outlook is ideal. Make it network-centric and accessible via the web, and it would be good.
Has anyone spent any time investigating this or use a product that is worthy of mention?
Please let me know. Thanks a lot!
Windows SharePoint Services 3.0, and it's free!
I created a category in Spiceworks called "Task" and then just make them tickets. Not horrible.
+1 for Sharepoint.
The 2010 Foundation (still free) works very well for this. Easy to set up and get going.
Thanks guys. SharePoint is an option and we do have a rather large deployment; however, I was looking for something that accommodates slightly more interaction, such as adding notes, in a granular, easy-flowing way. SharePoint's Tasks option doesn't really provide a very nice flow per task, in my opinion.
Thanks for the suggestions; I had already come across those sites during my searching. Though some look okay initially, they leave something to be desired, especially when hosting your own solution; the per-user fee gets a bit silly price-wise.
Any other suggestions people have?
Dec 31, 2010 at 9:39 UTC
Microsoft Project is too complicated and is not designed for task tracking. SharePoint will take you a lot of time to set up. It also does not give you a granular level of task detail and follow-up. We use Team Task Manager. It takes a few minutes to set up on your network. It also has the notes-sharing feature. Our group uses it daily.
You can download a free trial. http://www.deskshare.com/team-task-management.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112533.84/warc/CC-MAIN-20170822162608-20170822182608-00614.warc.gz | CC-MAIN-2017-34 | 2,210 | 14 |
https://bugzilla.redhat.com/show_bug.cgi?id=46746 | code | Description of Problem:
During an FTP TUI install of Fairfax beta1, the install stopped with a
fatal error initializing swap on device hda5. A second (seemingly)
identical install with the same options then succeeded.
Only happened once. It's possible that the first attempt did something
with swap that caused the second attempt to succeed.
Steps to Reproduce:
1. Choose "Autopartitioning".
2. Choose "Remove all partitions" with hda [*] checked; I believe that this
machine had two primary ext2 partitions (hda1=256MB and hda2=remaining of
8GB) and nothing else (NO swap) when this install started.
3. The auto-selected configuration looked like this:
/dev/hda1 1-1018 7985M ext3 /
/dev/hda2 1019-1042 188M swap
/dev/hda3 1043-1048 47M ext3 /boot
4. Highlight /dev/hda1 and choose "Delete". (hda2 moves to hda1 etc., free
space listed at end of drive)
5. Add 256M / partition (hda1 moves to hda2 etc., / goes on hda1)
6. Add 512M /var partition (hda1 moves to hda2 etc., /var goes on hda1)
7. Add 2048M /usr partition (hda1 moves to hda2 etc., /usr goes on hda1)
8. Add /export partition with "Fill all available" checked (goes at the end
of the disk) -- when all this is finished we have:
/dev/hda1 1-261 2047M ext3 /usr
/dev/hda2 262-326 509M ext3 /var
/dev/hda3 327-359 258M ext3 /
/dev/hda4 360-1048 5404M Extended
/dev/hda5 360-383 188M swap
/dev/hda6 384-389 47M ext3 /boot
/dev/hda7 390-1048 5164M ext3 /export
[I must say that looks really strange to me... is this "intelligent"
partitioning to a purpose, or is it just the order I did things in?]
9. Fill in a bunch more screens and get the "/tmp/install.log" message.
A dialog titled "Error" that reads: "An error occurred trying to
initialize swap on device hda5. This problem is serious, and the install
cannot continue. Press Enter to reboot your system." F3 shows:
* Detected 96M of memory
* Swap attempt of 96M to 192M
The install should begin.
I think the fact that I began the first attempt with two ext2 partitions
and no swap partition(s) (see above) may be relevant? In any case, I did
the same things the second time, and then it worked.
Jeremy, this looks like more failure to reread partition table bugs.
*** This bug has been marked as a duplicate of 46450 *** | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203448.17/warc/CC-MAIN-20190324124545-20190324150545-00473.warc.gz | CC-MAIN-2019-13 | 2,236 | 43 |
http://wiki.sandship.rockbitegames.com/index.php?title=User:DeweyBirks25399&oldid=909752 | code | Hello from Austria. I'm glad to have come across you. My first name is Bryan.
I live in a small city called Grillenberg in northern Austria.
I was also born in Grillenberg, 26 years ago. Married in May 2003. I'm working at the office.
Visit my web blog สมัครสมาชิก Mawinbet | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00794.warc.gz | CC-MAIN-2022-40 | 292 | 4 |
https://desercik.eu/guides/forge-of-empires/ | code | How can I unlock the Guild Battlegrounds? Why the difference in rewards is so close between first and last places? If I move to the next Age while a Battleground is running, which goods and units will be demanded? What happens when a Guild is disbanded? Why inactive guilds are
Can I rotate buildings? How do I gain population? How can I cancel productions? How can I cancel the construction of a building? Can I store buildings in the inventory? My citizens are unhappy. What should I do? How can I upgrade my buildings to the next age(s)? Can I
The Antiques Dealer aims to help players use items in their inventory which have succumbed to the dust of ages past! The Antiques Dealer loves to collect artifacts from Forge’s history, and will pay a pretty penny for the rarest finds! | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00439.warc.gz | CC-MAIN-2024-10 | 784 | 3 |
https://community.looker.com/embedding-looker-powered-by-looker-75/creating-a-proof-of-concept-embedded-dashboard-powered-by-looker-824 | code | If you’re interested in testing out Looker’s Powered by Looker embedded analytics functionality you can do so as an existing customer by following these proof of concept instructions. If you run into any trouble reach out to us on in-app chat or by visiting help.looker.com and we’d be more than happy to help!
In the example below we use a fictitious e-commerce store that has many brands (suppliers), where we want to surface analytics to each supplier (so we will need to restrict data access based on the brand name tied to the transactional data).
Create New LookML Model File
Create a new LookML model file called ‘embed’ in your current LookML project. There should be no need to create a new project.
Copy Existing Explore
Copy an existing explore from your standard model file to test with; we will use one that explores the transactions that will power the queries used in Dashboards and Looks.
Provision Your User Account With Wildcard Access
Go to the admin panel and create a new user attribute named brand. You can learn more about user attributes on this page in docs. In this example you will see the default value is set to % - the wildcard for string type user attributes.
Create Test User Account
Create a "fake user" whose brand user attribute is set to some value that isn't the wildcard. You will sudo as this user to test how the filters are applied. To set the user attribute to something other than the default, you can either edit the user after creation or set user values on the user attribute page (shown below).
In this example the test account will only have access to data where products.brand = ‘Allegra K’
Add Access Filters to the Explore
Now that the user attribute is created and you have different attribute values applied to different users (your user vs. the fake user), add the 'access_filter' parameter to the explore in your new model. For the field parameter, include the view.field reference that the access filter should apply to all SQL generated from this explore. This should be a field defined in the explore's view file or in one of the joined views.
In this example it must be a LookML field defined in either order_items, inventory_items, users or products view files. In this example, we apply an access filter to all queries sourced from the order_items explore applied to the brand field from the products view.
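The explore with its access filter is shown as a screenshot in the original post; reconstructed from the description, it presumably looks something like this sketch:

```lookml
explore: order_items {
  access_filter: {
    field: products.brand
    user_attribute: brand
  }
}
```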
Notice above that we also included the user attribute name in the access filter. This equates a user's brand user attribute value to the products.brand field.
Test User Account
You can start exploring and testing out queries as the embed user to see if the filters are applied as specified. All generated SQL should include an additional WHERE clause limiting the query based on the field and its associated filter value. You can view this by clicking the SQL tab in the Explore window after running a query:
Admin SQL: user attribute set to the wildcard (% symbol):
Fake user: user attribute set to Allegra K
Create or Test With Copy of Existing Dashboard
If you're interested in testing a dashboard that uses Looks based on this explore, you can create a copy just for this embed purpose. If you want to create a new dashboard, do so first as the admin user. Create Looks, add them to the dashboard, then sudo as the embed user and view the dashboard from their context. It should be filtered to just their data. View our docs for more information on creating dashboards.
Here’s the view from our embed.looker.com application for the logged in embedded user, in this case the user is representing the brand Allegra K:
Embedded Demo Links
Note: any time you would like to see what the embedded version of a dashboard looks like, you can add /embed just before the 'dashboards' portion of the URL in your Looker instance.
“https://[your Looker endpoint]/dashboards/thelook/1_business_pulse”
“https://[your Looker endpoint]/embed/dashboards/thelook/1_business_pulse”
Visiting the embed/dashboards link will load the dashboard as an embedded dashboard.
SSO Embedding and Authentication
Use of Looker's SSO embedding option allows embedding of dashboards for a set of specified external users. Your application would manage the creation and permissioning of users in a programmatic and scalable way, removing the requirement of creating and maintaining these users in Looker.
If you require a more advanced or customizable embedded solution (SSO with passive login), please contact your Looker account manager for more details, or visit help.looker.com.
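To give a rough idea of the mechanics, SSO embedding works by having your application build a signed embed URL: the URL parameters (external user id, permissions, models, nonce, timestamp, and so on) are signed with a shared embed secret and the signature is appended. The Python sketch below illustrates only the general HMAC-signing shape; the exact parameter set and signing order are assumptions here, so follow the official examples (linked below) for the authoritative recipe:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote

EMBED_SECRET = "your-embed-secret"   # placeholder: the secret configured for SSO embedding
HOST = "yourcompany.looker.com"      # placeholder Looker host

def sign(values):
    """HMAC-SHA1 sign the newline-joined values with the embed secret (illustrative only)."""
    string_to_sign = "\n".join(values)
    digest = hmac.new(EMBED_SECRET.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

embed_path = "/login/embed/" + quote("/embed/dashboards/thelook/1_business_pulse", safe="")
nonce = secrets.token_hex(16)
issued_at = str(int(time.time()))
signature = sign([HOST, embed_path, nonce, issued_at])  # the real scheme includes more fields

url = f"https://{HOST}{embed_path}?nonce={nonce}&time={issued_at}&signature={quote(signature)}"
print(url)
```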
You can find example code for using the Embed SSO API in various languages (including the Ruby reference implementation) here. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00233.warc.gz | CC-MAIN-2023-14 | 4,748 | 31 |
http://www.sevenforums.com/network-sharing/157323-internet-stoped-after-while.html | code | It could frankly be any number of things, but here is a list of what I find to be the most common stuff with mine.
Btw, over here "wireless broadband" is called "mobile broadband", so I'll be calling it that so I don't confuse myself.
Firstly, I actually had that particular modem at one point. I found it to be slow and unreliable. Now, I may have just had a dodgy one, but my friend also had one and had the same issues, so I doubt it's a fluke; still, it's something to consider. I managed to convince my provider to replace it with a better one (the ZTE MF112, which ironically is actually newer, despite the lower number).
I find that with any mobile broadband connection, especially at peak times (evenings/weekends), the connection has a habit of just occasionally crapping out. Basically, the mobile "cell" (the antenna that the mobile connection is made through) can only handle so much, and if lots of people are using their phones/dongles (modems), then it overloads and you have to restart the connection to get any sensible speed out of it. This is why I asked about torrenting: torrenting overloads the connection very quickly. Disconnecting/reconnecting fixes it in most cases. This is actually mainly due to contention: Contention - What it is, how it works. Or "Why is my Internet slow?"
The next thing to consider is latency (response times, or "ping"). Mobile broadband has a terrible ping. I have honestly never seen anything short of 150 ms, even from a relatively local site like the BBC. Slow response times unfortunately mean a slow connection. At peak times I have seen this go up to 300 ms or even more. This obviously adds to the problem I described above.
Of course it is possible that it's a modem problem. The first thing I would suggest is to uninstall the driver and reinstall it. First uninstall the connection software using Revo: Download Revo Uninstaller Freeware - Free and Full Download - Uninstall software, remove programs, solve uninstall problems
(the free version will be more than adequate). This will make sure that any leftover bits are also deleted. One thing to note about Revo is that if it finds leftover registry keys, the ones in bold are safe to delete, and the ones that aren't should be left.
Once it's been uninstalled, just check Device Manager (Control Panel > Hardware and Sound > Device Manager) to check that it uninstalled the modem too. If it hasn't, right-click it, click "Uninstall", then unplug the modem, plug it back in, and reinstall it.
I would also seriously recommend not using the connection manager. On my machine at least, it has a detrimental effect on the connection. Once you have connected once, there is no need to run it; the connection will be listed under the standard network connections list at the bottom right of the taskbar (the icon that looks like a computer with a big red X on it).
If none of that helps, I would suggest getting onto your provider. They will probably run you through a load of pointless diagnostics, but after all that they may also replace the modem with something better, or at least be able to give you an idea of what's going on. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698238192/warc/CC-MAIN-20130516095718-00022-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 3,135 | 10 |
https://xmlbeans.apache.org/docs/2.6.0/guide/conMethodsForGeneratedJavaTypes.html | code | Methods for Types Generated From Schema
As you may have seen in Getting Started with XMLBeans, you use the types generated from schema to access XML instances based on the schema. If you're familiar with the JavaBeans technology, the conventions used in the generated API will be recognizable.
In general, elements and attributes are treated as "properties" in the JavaBeans sense. In other words, as you would with JavaBeans properties, you manipulate parts of the XML through accessor methods such as getCustomer() (where you have a "customer" element), setId(String) (where you have an "id" attribute), and so on. However, because schema structures can be somewhat complex, XMLBeans provides several other method styles for handling those structures in XML instances.
Several methods are generated for each element or attribute within the complex type. This topic lists each method that could be generated for a given element or attribute.
Note that whether or not a given method is generated is based on how the element or attribute is defined in schema. For example, a customer element definition with a maxOccurs attribute value of 1 will result in a getCustomer method, but not a getCustomerArray method — after all, only one customer element is possible in an instance document.
Note, too, that there may be two sets of parallel methods: a plain set and a set whose prototypes start with an "x". An "x" method such as xgetName or xsetName is generated for elements or attributes whose type is a simple type. A simple type may be one of the 44 built-in simple types or may be a restriction in schema of one of those built-in types. Of course, an attribute is always of a simple type. For built-in simple types, an "x" method will get or set one of the types provided with XMLBeans, such as XmlString, XmlInteger, XmlGDay, and so on. For derived types, the "x" method will get or set a generated type.
Methods generated for elements or attributes that allow a single occurrence. An element is singular if it was declared with maxOccurs="1". An attribute is singular if it was not declared with use="prohibited".
Type getFoo() void setFoo(Type newValue)
Returns or sets the value of Foo. Generated when Foo is an attribute, or is an element that can occur only once as a child.
XmlType xgetFoo() void xsetFoo(XmlType newValue)
Returns or sets the value of Foo as an XMLBean simple type. These methods are generated if Foo's type is defined in schema as a simpleType.
boolean isNilFoo() void setNilFoo()
Determines or specifies whether the Foo element is nil (in other words, "null" in schema terms), meaning it can be empty. A nil element looks something like this: <foo/>. These methods are only generated when an element type is declared as nillable in schema, i.e., it has a nillable="true" attribute.
Adds a new Foo to the document as an XMLBeans simple type, or returns Foo's value if one already exists.
boolean isSetFoo() void unSetFoo()
Determines whether the Foo element or attribute exists in the document; removes Foo. These methods are generated for elements and attributes that are optional. In schema, an optional element has a minOccurs attribute set to "0"; an optional attribute has a use attribute set to "optional".
Methods generated for elements that allow multiple occurrences.
An element may occur multiple times if it has a maxOccurs attribute set to "unbounded" or greater than 1. Attributes can't occur multiple times.
Type[] getFooArray() void setFooArray(Type[] newValues)

Returns or sets all of the Foo elements.

```java
// Get an array of all of the purchase order's item child elements.
Item[] items = myPO.getItemArray();
```

Type getFooArray(int index) void setFooArray(Type newValue, int index)

Returns or sets the Foo child element at the specified index.

```java
// Sets the value of the third item child element.
myPO.setItemArray(newItem, 2);
```

int sizeOfFooArray()

Returns the number of Foo child elements.

```java
// Returns the number of item child elements.
int itemCount = myPO.sizeOfItemArray();
```
void removeFoo(int index)
Removes the Foo child element at the specified index.
XmlType[] xgetFooArray() void xsetFooArray(XmlType[] arrayOfNewValues)

Returns or sets all of the Foo elements as XMLBeans simple types. Generated only when the Foo element is defined as a simple type.

```java
/*
 * Returns the values of all the phone child elements of an employee element,
 * where the phone element has been defined as xs:string.
 */
XmlString[] empPhones = currentEmployee.xgetPhoneArray();
```
XmlType xgetFooArray(int index) void xsetFooArray(int index, XmlType newValue)
Returns or sets the Foo element at the specified index, using an XMLBeans simple type value. Generated for an element defined as a simple type in schema.
void insertFoo(int index, FooType newValue)
Inserts the specified Foo child element at the specified index.
void addFoo(FooType newValue)
Adds the specified Foo to the end of the list of Foo child elements.
XmlType insertNewFoo(int index)
Inserts a new Foo at the specified index, returning an XMLBeans simple type representing the new element; returns the existing Foo if there's already one at index.
Adds a new Foo element to the end of the list of Foo child elements, returning an XMLBeans simple type representing the newly added element.
boolean isNilFooArray(int index) void setNilFooArray(int index)
Determines or specifies whether the Foo element at the specified index is nil. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00553.warc.gz | CC-MAIN-2023-06 | 5,355 | 38 |
https://azure.microsoft.com/en-us/overview/startups/?WT.mc_id=healthagent-acomblog-dahouldi | code | Azure for startups
Run lean, stay agile, and grow fast
- Choose your platform, including open source
- Deploy a web app or virtual machine in seconds
- Scale with point-and-click simplicity
- Get started easily with common startup scenarios
- Manage IT budget efficiently
Open source welcome
Azure really is open source friendly and flexible. Use the tools you already know. From Node.js to Ubuntu, bring your favorite open source software tools and technologies to Azure and explore the possibilities. Plus, Azure Marketplace supports your favorite Linux distributions including Debian and SUSE.
Great ideas are why you’ve got a startup, right? Accelerate your innovation with Azure’s application platform. Build simple to complex projects with an easy-to-use and consistent portal experience that offers the cloud services you need most.
Trusted global scale
Startups may start small, but you’ll be ready to grow fast and go far, achieving global scale in 34 local regions. Azure has strict processes and practices to secure the platform, and also empower you to protect your data and mission-critical applications.
Ready to help you grow
From idea to IPO, Microsoft is uniquely equipped to support startups at every stage of your journey. Whether it’s technology enablement, business growth through accelerators and venture funding, or connecting you with customers, learn how we can help you grow. | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00591.warc.gz | CC-MAIN-2022-05 | 1,409 | 14 |
https://javadoc.io/static/org.scalacheck/scalacheck_2.11/1.14.3/org/scalacheck/Properties.html | code | Convenience method that checks the properties with the given parameters (or default parameters, if not specified) and reports the result on the console.
Adds all properties from another property collection to this one, with a prefix that is prepended to each included property's name.
Adds all properties from another property collection to this one
Convenience method that makes it possible to use this property collection as an application that checks itself on execution.
Customize the parameters specific to this class.
Returns all properties of this collection in a list of name/property pairs. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155188.79/warc/CC-MAIN-20210804205700-20210804235700-00501.warc.gz | CC-MAIN-2021-31 | 598 | 6 |
https://community.spiceworks.com/topic/486467-ad-replication-between-domains | code | Hi, we currently have two domains in two different international locations...
location1.local in the US and location2.co.uk internationally. We are connected through a VPN tunnel. We can share permissions, etc.; for example, I can grant a user in the location2.co.uk domain permissions to a file store on a location1.local server, and we have a zone for location2.co.uk in the location1.local DNS server and vice versa.
Is there a way to replicate users as well?
Say I create a few users in location1.local, how can I get them to replicate to location2.co.uk?
Is there anything in sites and services I should consider? | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662580803.75/warc/CC-MAIN-20220525054507-20220525084507-00726.warc.gz | CC-MAIN-2022-21 | 611 | 5 |
http://www.shellsec.com/news/10986.html | code | Naive Bayes is a very simple classification algorithm that makes some strong assumptions about the independence of each input variable.
Nevertheless, it has been shown to be effective in a large number of problem domains. In this post you will discover the Naive Bayes algorithm for categorical data. After reading this post, you will know:
- How to work with categorical data for Naive Bayes.
- How to prepare the class and conditional probabilities for a Naive Bayes model.
- How to use a learned Naive Bayes model to make predictions.
This post was written for developers and does not assume a background in statistics or probability. Open a spreadsheet and follow along. If you have any questions about Naive Bayes ask in the comments and I will do my best to answer.
Let’s get started.
Naive Bayes Tutorial for Machine Learning
Photo by Beshef , some rights reserved.
The dataset is contrived. It describes two categorical input variables and a class variable that has two outputs.
Weather | Car | Class
---|---|---
sunny | working | go-out
rainy | broken | go-out
sunny | working | go-out
sunny | working | go-out
sunny | working | go-out
rainy | broken | stay-home
rainy | broken | stay-home
sunny | working | stay-home
sunny | broken | stay-home
rainy | broken | stay-home
We can convert this into numbers. Each input has only two values and the output class variable has two values. We can convert each variable to binary as follows:
- sunny = 1
- rainy = 0
- working = 1
- broken = 0
- go-out = 1
- stay-home = 0
Therefore, we can restate the dataset as:
| Weather | Car | Class |
| --- | --- | --- |
| 1 | 1 | 1 |
| 0 | 0 | 1 |
| 1 | 1 | 1 |
| 1 | 1 | 1 |
| 1 | 1 | 1 |
| 0 | 0 | 0 |
| 0 | 0 | 0 |
| 1 | 1 | 0 |
| 1 | 0 | 0 |
| 0 | 0 | 0 |
This can make the data easier to work with in a spreadsheet or code if you are following along.
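If you would rather follow along in code than in a spreadsheet, the encoded dataset can be typed in directly. Here is a minimal sketch in Python; the variable name is mine, not part of the original post:

```python
# Each row is (weather, car, class):
# weather: 1 = sunny, 0 = rainy; car: 1 = working, 0 = broken;
# class: 1 = go-out, 0 = stay-home.
dataset = [
    (1, 1, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1),
    (0, 0, 0), (0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 0),
]
```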
Learn a Naive Bayes Model
There are two types of quantities that need to be calculated from the dataset for the naive Bayes model:
- Class Probabilities.
- Conditional Probabilities.
Let’s start with the class probabilities.
Calculate the Class Probabilities
The dataset is a two-class problem and we already know the probability of each class because we contrived the dataset.
Nevertheless, we can calculate the class probabilities for classes 0 and 1 as follows:
- P(class=1) = count(class=1) / (count(class=0) + count(class=1))
- P(class=0) = count(class=0) / (count(class=0) + count(class=1))
- P(class=1) = 5 / (5 + 5)
- P(class=0) = 5 / (5 + 5)
This works out to be a probability of 0.5 for any given data instance belonging to class 0 or class 1.
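If you are coding along, the same two ratios drop straight out of the dataset list defined earlier (a sketch; `dataset` is the hypothetical variable from the previous snippet):

```python
n = len(dataset)
p_class_1 = sum(1 for (_, _, c) in dataset if c == 1) / n  # P(class=go-out)    -> 0.5
p_class_0 = sum(1 for (_, _, c) in dataset if c == 0) / n  # P(class=stay-home) -> 0.5
```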
Calculate the Conditional Probabilities
The conditional probabilities are the probability of each input value given each class value.
The conditional probabilities for the dataset can be calculated as follows:
Weather Input Variable
- P(weather=sunny|class=go-out) = count(weather=sunny and class=go-out) / count(class=go-out)
- P(weather=rainy|class=go-out) = count(weather=rainy and class=go-out) / count(class=go-out)
- P(weather=sunny|class=stay-home) = count(weather=sunny and class=stay-home) / count(class=stay-home)
- P(weather=rainy|class=stay-home) = count(weather=rainy and class=stay-home) / count(class=stay-home)
Plugging in the numbers we get:
- P(weather=sunny|class=go-out) = 0.8
- P(weather=rainy|class=go-out) = 0.2
- P(weather=sunny|class=stay-home) = 0.4
- P(weather=rainy|class=stay-home) = 0.6
Car Input Variable
- P(car=working|class=go-out) = count(car=working and class=go-out) / count(class=go-out)
- P(car=broken|class=go-out) = count(car=broken and class=go-out) / count(class=go-out)
- P(car=working|class=stay-home) = count(car=working and class=stay-home) / count(class=stay-home)
- P(car=broken|class=stay-home) = count(car=broken and class=stay-home) / count(class=stay-home)
Plugging in the numbers we get:
- P(car=working|class=go-out) = 0.8
- P(car=broken|class=go-out) = 0.2
- P(car=working|class=stay-home) = 0.2
- P(car=broken|class=stay-home) = 0.8
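All eight conditional probabilities can be counted the same way. The sketch below builds a lookup table keyed by (input index, input value, class value); the names `cond` and `class_counts` are mine, and it assumes the `dataset` list from the earlier snippet:

```python
from collections import Counter

class_counts = Counter(c for (_, _, c) in dataset)  # {1: 5, 0: 5}
cond = {}  # (input index, input value, class value) -> conditional probability
for i in (0, 1):          # 0 = weather, 1 = car
    for v in (0, 1):      # input value
        for c in (0, 1):  # class value
            matches = sum(1 for row in dataset if row[i] == v and row[2] == c)
            cond[(i, v, c)] = matches / class_counts[c]

print(cond[(0, 1, 1)])  # P(weather=sunny|class=go-out)  -> 0.8
print(cond[(1, 0, 0)])  # P(car=broken|class=stay-home)  -> 0.8
```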
We now have everything we need to make predictions using the Naive Bayes model.
Make Predictions with Naive Bayes
We can make predictions using Bayes Theorem.
P(h|d) = (P(d|h) * P(h)) / P(d)
- P(h|d) is the probability of hypothesis h given the data d. This is called the posterior probability.
- P(d|h) is the probability of data d given that the hypothesis h was true.
- P(h) is the probability of hypothesis h being true (regardless of the data). This is called the prior probability of h.
- P(d) is the probability of the data (regardless of the hypothesis).
In fact, we don't need the denominator P(d) to predict the most likely class for a new data instance, because it is the same for every class. We only need the numerator for each class; the class that gives the largest response will be the predicted output.
MAP(h) = max(P(d|h) * P(h))
Let's take the first record from our dataset and use our learned model to predict which class we think it belongs to.
We plug the probabilities from our model in for both classes and calculate the response, starting with the response for the output "go-out". We multiply the conditional probabilities together and multiply the result by the probability of any instance belonging to the class.
- go-out = P(weather=sunny|class=go-out) * P(car=working|class=go-out) * P(class=go-out)
- go-out = 0.8 * 0.8 * 0.5
- go-out = 0.32
We can perform the same calculation for the stay-home case:
- stay-home = P(weather=sunny|class=stay-home) * P(car=working|class=stay-home) * P(class=stay-home)
- stay-home = 0.4 * 0.2 * 0.5
- stay-home = 0.04
We can see that 0.32 is greater than 0.04, therefore we predict “go-out” for this instance, which is correct.
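The calculation above is just a product per class followed by picking the maximum, so it collapses into a small function. A sketch reusing the hypothetical `cond`, `p_class_1` and `p_class_0` from the earlier snippets:

```python
def predict(weather, car):
    # MAP(h) = max over classes of P(d|h) * P(h), using the naive independence assumption.
    scores = {
        1: cond[(0, weather, 1)] * cond[(1, car, 1)] * p_class_1,  # go-out
        0: cond[(0, weather, 0)] * cond[(1, car, 0)] * p_class_0,  # stay-home
    }
    return max(scores, key=scores.get), scores

print(predict(1, 1))  # sunny + working -> (1, {1: 0.32..., 0: 0.04...})
```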
We can repeat this operation for the entire dataset, as follows:
| Weather | Car | Class | out? | home? | Prediction |
| --- | --- | --- | --- | --- | --- |
| sunny | working | go-out | 0.32 | 0.04 | go-out |
| rainy | broken | go-out | 0.02 | 0.24 | stay-home |
| sunny | working | go-out | 0.32 | 0.04 | go-out |
| sunny | working | go-out | 0.32 | 0.04 | go-out |
| sunny | working | go-out | 0.32 | 0.04 | go-out |
| rainy | broken | stay-home | 0.02 | 0.24 | stay-home |
| rainy | broken | stay-home | 0.02 | 0.24 | stay-home |
| sunny | working | stay-home | 0.32 | 0.04 | go-out |
| sunny | broken | stay-home | 0.08 | 0.16 | stay-home |
| rainy | broken | stay-home | 0.02 | 0.24 | stay-home |
If we tally up the predictions compared to the actual class values, we get an accuracy of 80%, which is excellent given that there are conflicting examples in the dataset.
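Scoring every row with the same function reproduces that figure (again a sketch built on the hypothetical helpers above):

```python
correct = sum(1 for (w, car, c) in dataset if predict(w, car)[0] == c)
print(correct / len(dataset))  # -> 0.8
```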
In this post you discovered exactly how to implement Naive Bayes from scratch. You learned:
- How to work with categorical data with Naive Bayes.
- How to calculate class probabilities from training data.
- How to calculate conditional probabilities from training data.
- How to use a learned Naive Bayes model to make predictions on new data.
Do you have any questions about Naive Bayes or this post? Ask your question by leaving a comment and I will do my best to answer it.
| s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815034.13/warc/CC-MAIN-20180224013638-20180224033638-00042.warc.gz | CC-MAIN-2018-09 | 7,242 | 97
https://github.com/NixOS/nixpkgs/pull/97075 | code | Join GitHub today
GitHub is home to over 50 million developers working together to host and review code, manage projects, and build software together.Sign up
Motivation for this change
I haven't figured out yet how to set the data directory, so I wasn't able to test it with a browser. Maybe @ngerstle can help test. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00312.warc.gz | CC-MAIN-2020-45 | 313 | 4
https://lists.uni-koeln.de/pipermail/linux-fai/2018-January/011895.html | code | setup-storage fails on blank disk
wfai at parplies.de
Wed Jan 3 17:28:36 CET 2018
Andreas Heinlein wrote on 2018-01-03 13:53:40 +0100 [setup-storage fails on blank disk]:
> I have encountered a problem with setup-storage which occurs only when
> the disk is blank, i.e. wiped with nwipe/dban or brand new. It then
> fails on creating the LVM; running 'pvcreate' returns 'cannot open
> /dev/sda5 exclusively'.
this is probably unrelated, but is there any reason to put the LVM PV inside
a "logical" volume? DOS extended partitions seem to be the worst hack ever
invented to get around a limitation in a bad design, yet they repeatedly
and apparently unnecessarily pop up in quoted disk_configs:
> This is your disk_config file:
> # generic disk configuration for one small disk
> # disk size from 500Mb up to what you can buy today
> # <type> <mountpoint> <size in mb> <fstype> <mount options> [extra options]
> disk_config disk1 disklabel:msdos bootable:1 preserve_lazy:6 align-at:1M fstabkey:uuid
> primary /boot 300 ext4 rw createopts="-O ^64bit,^metadata_csum"
> logical - 29500-30000 - -
> logical /media/daten 1024- ext4 acl createopts="-O ^64bit,^metadata_csum -L Daten"
I count three partitions, which would work perfectly with primary partitions
(furthermore, you are using LVM to have an arbitrary number of named and
dynamic "volumes" (i.e. partitions) anyway, so if you needed more, LVM would
be the superior mechanism; of course, your specific requirements may vary).
Ok, you are preserving a logical partition, so in this particular case you'd
actually need to stick with logical partitions, but the partition in question
is ext4, not FAT-based, so it doesn't appear to be a legacy Windoze issue.
My point: am I missing something, and there is some obscure benefit of putting
an LVM container within an extended-partition-container (such as hiding it
from something), or is it simply a common misconception that you for some
reason cannot or should not put an LVM PV (or even several individual native
Linux partitions - such as /, /var and /tmp) into primary partitions -
assuming you only need upto four of them (and, obviously, assuming you are
still using MSDOS partition tables)?
Or, differently: for a *blank disk*, you obviously won't be preserving sda6,
and you probably aren't referencing it by partition number ("fstabkey:uuid"),
so does using 'primary' instead of 'logical' for all three partitions change
anything concerning the error you are experiencing?
Hope that helps someone (perhaps me ;-) ...
More information about the linux-fai mailing list | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829568.86/warc/CC-MAIN-20181218184418-20181218210418-00179.warc.gz | CC-MAIN-2018-51 | 2,562 | 40
https://forum.uipath.com/t/custom-component-checkbox-as-arguement/225515 | code | I am Building a Library using uipath studio. I require a checkBox as an Input to the component. How do we achieve that.
Below is what i Tried in the arguements Pane of Library
For all the Above , After Publishing the Component, Below is what i get in the Custom activity
I Want to have a CheckBox as available for private.
Please Help ! | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587963.12/warc/CC-MAIN-20211026231833-20211027021833-00619.warc.gz | CC-MAIN-2021-43 | 336 | 5 |
http://www.everybodytaifungtonight.com/2007/08/bot-wub.html | code | Sunami Jen tells you, "have two scrolls for ya"
Sunami is a bot. Bots talk to me sometimes. ;-)
Anyway -- woot! Just got two more scrolls for my attempt to learn a full array of Creatures Self 7 spells. I shop with Sunami a lot for scrolls, and this owner of the bot has been making a point to find me scrolls I need, based on my !search inputs. Gotta love businesspeople! ;-) | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502097.77/warc/CC-MAIN-20200605143036-20200605173036-00238.warc.gz | CC-MAIN-2020-24 | 376 | 3 |
https://db0nus869y26v.cloudfront.net/en/KDE | code | |Founded||14 October 1996|
|Products||KDE Plasma, KDE Frameworks, KDE Applications, Calligra Suite, Krita, KDevelop, digiKam, Amarok, Kirigami, and many more|
|Method||Artwork, development, documentation, promotion, and translation.|
KDE is an international free software community that develops free and open-source software. As a central development hub, it provides tools and resources that allow collaborative work on this kind of software. Well-known products include the Plasma Desktop (the default desktop environment on many Linux distributions), KDE Frameworks, and a range of cross-platform applications such as Amarok, digiKam, and Krita that are designed to run on Unix and Unix-like operating systems, Microsoft Windows, and Android.
KDE was founded in 1996 by Matthias Ettrich, a student at the University of Tübingen. At the time, he was troubled by certain aspects of the Unix desktop. Among his concerns was that none of the applications looked or behaved alike. In his opinion, desktop applications of the time were too complicated for end users. In order to solve the issue, he proposed the creation of a desktop environment in which users could expect the applications to be consistent and easy to use. His initial Usenet post spurred significant interest, and the KDE project was born.
The name KDE was intended as a wordplay on the existing Common Desktop Environment, available for Unix systems. CDE was an X11-based user environment jointly developed by HP, IBM, and Sun through the X/Open consortium, with an interface and productivity tools based on the Motif graphical widget toolkit. It was supposed to be an intuitively easy-to-use desktop computer environment. The K was originally suggested to stand for "Kool", but it was quickly decided that the K should stand for nothing in particular. Therefore, the KDE initialism expanded to "K Desktop Environment" before it was dropped altogether in favor of simply KDE in a rebranding effort.
In the beginning Matthias Ettrich chose to use Trolltech's Qt framework for the KDE project. Other programmers quickly started developing KDE/Qt applications, and by early 1997, a few applications were being released. On 12 July 1998 the first version of the desktop environment, called KDE 1.0, was released. The original GPL licensed version of this toolkit only existed for platforms which used the X11 display server, but with the release of Qt 4, LGPL licensed versions are available for more platforms. This allowed KDE software based on Qt 4 or newer versions to theoretically be distributed to Microsoft Windows and OS X.
The KDE Marketing Team announced a rebranding of the KDE project components on 24 November 2009. Motivated by the perceived shift in objectives, the rebranding focused on emphasizing both the community of software creators and the various tools supplied by the KDE, rather than just the desktop environment.
What was previously known as KDE 4 was split into KDE Plasma Workspaces, KDE Applications, and KDE Platform (now KDE Frameworks) bundled as KDE Software Compilation 4. Since 2009, the name KDE no longer stands for K Desktop Environment, but for the community that produces the software.
| Release | Date | Notes |
| --- | --- | --- |
| KDE development announced | 14 October 1996 | |
| K Desktop Environment 1 | 12 July 1998 | |
| K Desktop Environment 2 | 23 October 2000 | |
| K Desktop Environment 3 | 3 April 2002 | |
| KDE Software Compilation 4 | 11 January 2008 | |
| KDE Plasma 5 | 15 July 2014 | Former KDE/KDE SC split into KDE Plasma, KDE Frameworks and KDE Applications |
Main article: KDE Projects
The KDE community maintains multiple free-software projects. The project formerly referred to as KDE (or KDE SC, Software Compilation) nowadays consists of three parts: KDE Plasma, KDE Frameworks, and KDE Applications.
KDE neon is a software repository that uses Ubuntu LTS as a core. It aims to provide the users with rapidly updated Qt and KDE software, while updating the rest of the OS components from the Ubuntu repositories at the normal pace. KDE maintains that it is not a "KDE distribution", but rather an up-to-date archive of KDE and Qt packages.
WikiToLearn, abbreviated WTL, is one of KDE's newer endeavors. It is a wiki (based on MediaWiki, like Wikipedia) that provides a platform to create and share open source textbooks. The idea is to have a massive library of textbooks for anyone and everyone to use and create. Its roots lie in the University of Milan, where a group of physics majors wanted to share notes and then decided that it was for everyone and not just their internal group of friends. They have become an official KDE project with several universities backing it.
Developing KDE software is primarily a volunteer effort, although various companies, such as Novell, Nokia, or Blue Systems, employ or employed developers to work on various parts of the project. Since a large number of individuals contribute to KDE in various ways (e.g. code, translation, artwork), organization of such a project is complex. A mentor program helps beginners to get started with developing and communicating within KDE projects and communities.
Communication within the community takes place via mailing lists, IRC, blogs, forums, news announcements, wikis and conferences. The community has a Code of Conduct for acceptable behavior within the community.
Currently the KDE community uses the Git revision control system. The KDE GitLab Instance (named Invent) gives an overview of all projects hosted by KDE's Git repository system. Phabricator is used for task management.
On 20 July 2009, KDE announced that the one millionth commit had been made to its Subversion repository. On 11 October 2009, Cornelius Schumacher, a main developer within KDE, wrote about the estimated cost (using the COCOMO model with SLOCCount) to develop the KDE software package with 4,273,291 lines of code, which would be about US$175,364,716. This estimation does not include Qt, Calligra Suite, Amarok, digiKam, and other applications that are not part of KDE core.
The overall direction is set by the KDE Core Team. These are developers who have made significant contributions within KDE over a long period of time. This team communicates using the kde-core-devel mailing list, which is publicly archived and readable, but joining requires approval. KDE does not have a single central leader who can veto important decisions. Instead, the KDE core team consists of several dozens of contributors who make decisions not by a formal vote, but through discussions.
The developers also organize into topical teams. For example, the KDE Edu team develops free educational software. These teams work mostly independently and do not all follow a common release schedule. Each team has its own messaging channels, both on IRC and on the mailing lists.
A KDE Patron is an individual or organization supporting the KDE community by donating at least 5000 Euro (depending on the company's size) to the KDE e.V. As of October 2017, there are six such patrons: Blue Systems, Canonical Ltd., Google, Private Internet Access, SUSE, and The Qt Company.
The KDE community's mascot is a green dragon named Konqi. Konqi's appearance was officially redesigned with the coming of Plasma 5, with Tyson Tan's entry (seen in the images) winning the redesign competition on the KDE Forums.
Katie is a female dragon. She was presented in 2010 and is appointed as a mascot for the KDE women's community.
Other dragons with different colors and professions were added to Konqi as part of the Tyson Tan redesign concept. Each dragon has a pair of letter-shaped antlers that reflect their role in the KDE community.
Kandalf the wizard was the former mascot for the KDE community during its 1.x and 2.x versions. Kandalf's similarity to the character of Gandalf led to speculation that the mascot was switched to Konqi due to copyright infringement concerns, but this has never been confirmed by KDE.
The financial and legal matters of KDE are handled by KDE e.V., a German non-profit organization. Among others, it owns the KDE trademark and the corresponding logo. It also accepts donations on behalf of the KDE community, helps to run the servers, assists in organizing and financing conferences and meetings, but does not influence software development directly.
In many countries, KDE has local branches. These are either informal organizations (KDE India) or like the KDE e.V., given a legal form (KDE France). The local organizations host and maintain regional websites, and organize local events, such as tradeshows, contributor meetings and social community meetings.
KDE has community identity guidelines (CIG) for definitions and recommendations which help the community to establish a unique, characteristic, and appealing design. The KDE official logo displays the white trademarked K-Gear shape on a blue square with mitred corners. Copying of the KDE Logo is subject to the LGPL. Some local community logos are derivations of the official logo.
Many KDE applications have a K in the name, mostly as an initial letter. The K in many KDE applications is obtained by spelling a word which originally begins with C or Q differently, for example Konsole and Kaffeine, while some others prefix a commonly used word with a K, for instance KGet. However, the trend is not to have a K in the name at all, such as with Stage, Spectacle, Discover and Dolphin.
On 23 June 2005, chairman of the Wikimedia Foundation announced that the KDE community and the Wikimedia Foundation have begun efforts towards cooperation. Fruits of that cooperation are MediaWiki syntax highlighting in Kate and accessing Wikipedia content within KDE applications, such as Amarok and Marble.
On 4 April 2008, the KDE e.V. and Wikimedia Deutschland opened shared offices in Frankfurt. In September 2009 KDE e.V. moved to shared offices with Free Software Foundation Europe in Berlin.
In May 2006, KDE e.V. became an Associate Member of the Free Software Foundation Europe (FSFE).
On 22 August 2008, KDE e.V. and FSFE jointly announced that after working with FSFE's Freedom Task Force for one and a half years KDE adopts FSFE's Fiduciary Licence Agreement. Using that, KDE developers can – on a voluntary basis – assign their copyrights to KDE e.V.
In September 2009, KDE e.V. and FSFE moved into shared offices in Berlin.
Several companies actively contribute to KDE, like Collabora, Erfrakon, Intevation GmbH, Kolab Konsortium, Klarälvdalens Datakonsult AB (KDAB), Blue Systems, and KO GmbH.
Nokia used Calligra Suite as base for their Office Viewer application for Maemo/MeeGo. They have also been contracting KO GmbH to bring MS Office 2007 file format filters to Calligra. Nokia also employed several KDE developers directly – either to use KDE software for MeeGo (e.g. KCal) or as sponsorship.
The software development and consulting companies Intevation GmbH of Germany and the Swedish KDAB use Qt and KDE software – especially Kontact and Akonadi for Kolab – for their services and products, therefore both employ KDE developers.
KDE participates in freedesktop.org, an effort to standardize Unix desktop interoperability.
In 2009 and 2011, GNOME and KDE co-hosted their conferences Akademy and GUADEC under the Desktop Summit label.
In December 2010 KDE e.V. became a licensee of the Open Invention Network.
Many Linux distributions and other free operating systems are involved in the development and distribution of the software, and are therefore also active in the KDE community. These include commercial distributors such as SUSE/Novell or Red Hat but also government-funded non-commercial organizations such as the Scientific and Technological Research Council of Turkey with its Linux distribution Pardus.
In October 2018, Red Hat declared that KDE Plasma was no longer supported in future updates of Red Hat Enterprise Linux, though it continues to be part of Fedora. The announcement came shortly after the announcement of the business acquisition of Red Hat by IBM for close to US$43 billion. As a result, Fedora now makes KDE Plasma and other KDE software available also to Red Hat Enterprise Linux users through their Extra Packages for Enterprise Linux (EPEL) project.
The two most important conferences of KDE are Akademy and Camp KDE. Each event is on a large scale, both thematically and geographically. Akademy-BR and Akademy-es are local community events.
Akademy is the annual world summit, held each summer at varying venues in Europe. The primary goals of Akademy are to act as a community-building event, to communicate the achievements of the community, and to provide a platform for collaboration with community and industry partners. Secondary goals are to engage local people and to provide space for getting together to write code. KDE e.V. assists with procedures, advice and organization. Akademy includes the conference, the KDE e.V. general assembly, marathon coding sessions, BoFs (birds-of-a-feather sessions) and a social program. BoFs meet to discuss specific sub-projects or issues.
The KDE community held KDE One, its first conference, in Arnsberg, Germany, in 1997 to discuss the first KDE release. Initially, each conference was numbered after the release and not held regularly. Since 2003 the conferences have been held once a year, and they have been named Akademy since 2004.
The yearly Akademy conference gives out the Akademy Awards, which the KDE community presents to KDE contributors. Their purpose is to recognize outstanding contributions to KDE. There are three awards: best application, best non-application and jury's award. The winners are chosen by the winners from the previous year. The first winners received a framed picture of Konqi signed by all attending KDE developers.
| Year | Location | Dates |
| --- | --- | --- |
| 2009 | Negril, Jamaica | 17–18 January |
| 2010 | La Jolla, US | 15–22 January |
| 2011 | San Francisco, US | 4–5 April |
Camp KDE is another annual contributors' conference of the KDE community. The event provides a regional opportunity for contributors and enthusiasts to gather and share their experiences. It is free to all participants. It is intended to ensure that KDE is not seen as simply Euro-centric. KDE e.V. helps with travel and accommodation subsidies for presenters, BoF leaders, organizers and core contributors. It has been held in North America since 2009.
In January 2008, KDE 4.0 Release Event was held at the Google headquarters in Mountain View, California, US, to celebrate the release of KDE SC 4.0. The community realized that there was a strong demand for KDE events in the Americas, therefore Camp KDE was produced.
Camp KDE 2009, the premiere meeting of KDE Americas, was held at the Travellers Beach Resort in Negril, Jamaica, sponsored by Google, Intel, iXsystems, KDE e.V. and Kitware. The event included 1–2 days of presentations, BoF meetings and hackathon sessions. Camp KDE 2010 took place at the University of California, San Diego (UCSD) in La Jolla, US. The schedule included presentations, BoFs, hackathons and a day trip. It started with a short introduction by Jeff Mitchell, the principal organizer of the conference, who talked a bit about the history of Camp KDE and gave some statistics about the KDE community. The talks were relatively well attended, with attendance up over the previous year to around 70 people. On January 19, the social event was a tour of a local brewery. Camp KDE 2011 was held at Hotel Kabuki in San Francisco, US, co-located with the Linux Foundation Collaboration Summit. The schedule included presentations, hackathons and a party at Noisebridge. The conference opened with an introduction spoken by Celeste Lyn Paul.
Season of KDE is an outreach program hosted by the KDE community. Students are appointed mentors from the KDE community that help bring their project to fruition.
conf.kde.in was the first KDE and Qt conference in India. The conference, organized by KDE India, was held at R.V. College of Engineering in Bangalore, India. The first three days of the event had talks, tutorials, and interactive sessions. The last two days were a focused code sprint. The conference was opened by its main organizer, Pradeepto Bhattacharya. Over 300 people were at the opening talks. The Lighting of the Auspicious Lamp ceremony was performed to open the conference. The first session was by Lydia Pintscher, who spoke on "So much to do – so little time". At the event, the return of Project Neon was announced on March 11, 2011, with the project providing nightly builds of the KDE Software Compilation. Closing the conference was keynote speaker and old-time KDE developer Sirtaj.
Día KDE (KDE Day) is an Argentinian event focused on KDE. It features talks and workshops. The purposes of the event are to spread the free software movement among the population of Argentina, bringing the KDE community and the environment it develops to them; to get to know and strengthen KDE-AR; and generally to bring the community together to have fun. The event is free.
A release party is a party which celebrates the release of a new version of the KDE SC (twice a year). KDE also participates in other conferences that revolve around free software.
Brazil's primary school education system operates computers running KDE software, with more than 42,000 schools in 4,000 cities, thus serving nearly 52 million children. The base distribution is called Educational Linux, which is based on Kubuntu. Besides this, thousands more students in Brazil use KDE products in their universities. KDE software is also running on computers in Portuguese and Venezuelan schools, with respectively 700,000 and one million systems reached.
Through Pardus, a local Linux distribution, many sections of the Turkish government make use of KDE software, including the Turkish Armed Forces, Ministry of Foreign Affairs, Ministry of National Defence, Turkish Police, and the SGK (Social Security Institution of Turkey), although these departments often do not exclusively use Pardus as their operating system.
CERN (European Organization for Nuclear Research) is using KDE software.
Germany uses KDE software in its embassies around the world, representing around 11,000 systems.
NASA used the Plasma Desktop during the Mars Mission.
Valve Corporation's handheld gaming computer, the Steam Deck, uses the KDE Plasma desktop environment when in desktop mode.
| s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510149.21/warc/CC-MAIN-20230926043538-20230926073538-00418.warc.gz | CC-MAIN-2023-40 | 18,488 | 68
https://laraveldevelopmentcompanyindia.com/blog/codeigniter-vs-laravel/ | code | CodeIgniter vs Laravel – Which one is Better?
August 10, 2020
Developers today often have to build highly complicated web applications and web portals. While this may seem like just part of the job, the reality is that beyond a certain level of intricacy, developing websites and apps can prove quite a hassle and take up much of the stakeholders' time. This has led to the need for a structured approach to development, which PHP frameworks are best placed to provide.
However, programmers face a tricky question when it comes to PHP frameworks: which technology or development structure should they select for their upcoming projects? It is essential that developers choose a framework that fits the client's business and technology needs; otherwise, the basic foundation of the site or web application will not be strong. While several options are available, the choice is often between Laravel and CodeIgniter. So, allow us to weigh the uses of CodeIgniter and Laravel to help you make the right decision.
What is Laravel?
It is an effective open-source platform extensively used as a PHP framework. The platform is designed to enable web application development using MVC architectural patterns. It was released under the MIT license, so its source code is hosted on GitHub. It is a trustworthy PHP framework, as it follows precise language rules. LDCI is a Laravel development company in India that can help with such requirements.
What is CodeIgniter?
It is a consistent PHP framework designed for developers who want a simple, well-designed toolkit to build fully featured applications. CodeIgniter is one of the preferred options for building dynamic web portals using PHP.
It offers freedom to users, as they are not required to rely strictly on the MVC development pattern. Besides, it supports third-party plugins, which can be used to implement complex functionality. It also provides effective security and encryption processes.
Difference between CodeIgniter and Laravel
| Feature | Laravel | CodeIgniter |
| --- | --- | --- |
| Built-in modules | Built with modularity in mind, letting developers split a project into smaller modules via bundles. | No built-in modularity; developers have to create and maintain modules through Modular Extension. |
| Template engine and API building | Ships with a simple but robust template engine, Blade, which lets PHP developers optimize the performance of web applications by enhancing views. | No built-in template engine; developers need to integrate a template engine tool such as Smarty, which helps with common tasks and improves website performance. |
| Template language | Blade Template Engine | PHP proprietary |
| Database model | Relational, object-oriented | Object-oriented |
| Development paradigm | Component-oriented | Object-oriented, event-driven, functional |
| Support for other DBMSs | Microsoft SQL Server, Oracle, MySQL, IBM DB2, PostgreSQL, OrientDB, and JDBC-compatible databases. | PostgreSQL, MySQL, Microsoft BI, and MongoDB; also DB2, Microsoft SQL Server, Oracle, and others. |
| Popularity and current trends | Extremely popular; its expressive coding style is favored by experienced developers. | Easy to use as of 2.x, which is why many web developers choose CodeIgniter. |
| Structure and updates | Follows the MVC file structure and ships with a command-line tool called Artisan. | Comes with an MVC structure and offers effortless onboarding; the structure is loosely based on object-oriented programming, though many developers use it as they see fit. |
| RESTful API support | RESTful controllers let developers produce a range of REST APIs without investing extra time. | Does not offer streamlined development of REST APIs. |
| Online help and libraries | Official documentation is detailed and useful; additional help is available from Laracasts.com. | Provides built-in functionality, and its website has a useful guide that can be used without much prior knowledge. |
| HTTPS support | Developers can define custom HTTPS routes and build a URL for each HTTPS route. | Does not support HTTPS fully; developers must use URLs to keep data transmission safe by creating pats. |
| Authentication | The Authentication class feature makes it simpler for developers to implement authentication and authorization rules. | No built-in authentication functions; developers must authenticate and authorize users by writing custom extensions. |
| Unit testing | Lets developers check application code thoroughly and continuously with PHPUnit. | No built-in testing tools; developers must use additional unit-testing tools to assess the quality of the application and code. |
| Learning curve | Offers many additional features that are harder for beginners to learn. | Easier for beginners to learn and use. |
| Stack Overflow questions | 96.7 k | 606. k |
| GitHub stars | 45.5 K | 16.5 K |
| Average salary | Around $71,459 per year for Laravel developers. | Around $47,753 per year for CodeIgniter developers. |
| Well-known companies using it | 9GAG, Geocodio, Union | Buffer, Webedia, Manchester |
Why Utilize Laravel?
Features of Laravel
- Provides a version control system that simplifies the management of migrations
- Facilitates modular packaging along with the Composer dependency manager
- Supports Eloquent ORM, a modern ActiveRecord implementation for working with the database
- Supports DBMS platforms such as PostgreSQL, MySQL, and SQL Server
- Provides useful features such as the Blade templating engine
- Ships with the Artisan command-line interface, including sample code
- Has excellent documentation
- Enables enforcing constraints between database objects through a modern query builder mechanism
- Has an auto-loading facility, so you don't need manual maintenance of inclusion paths
- Helps you create new tools with the assistance of an IoC container
Why Utilize CodeIgniter?
Features of Codelgniter
- Effective support and instant answers provided by the community
- Enables you to cache the website for enhanced performance and loading times
- Comprehensible and fully structured documentation
- Provides improved stability and support
- Delivers an easy routing method
Advantages of Laravel
- Trouble-free to explore and utilize
- Swift application creation and execution
- Easily accessible documents
- Secure and robust performance
- Bundled modularity enables easy reuse of code
- CLI-enabled, modernized tools carry out vital tasks and migrations
- The Blade template engine boosts rendering speed
- Reverse routing provision is an effective benefit
Disadvantages of Laravel
- Some parts of the platform are not tried and tested efficiently
- Possess minor bugs which need to be resolved by Laravel team
- Legacy system transformation is not simple
- The platform is slow for developers to adapt to
- Community support and backing is not as effective as other platforms
- Less experienced programmers face challenges with this platform
Advantages of CodeIgniter
- Developers don't need to write pages of code; they only need to program the essential things, as there are numerous resources to support and speed up the process
- Architecture is undemanding and novice developers can work without problems
- Enables MVC framework – Separation of code and its presentation is straightforward
- The input class offers server-side validation effortlessly, and scrubbing of user input is easy to manage
- It is uncomplicated to build readable SQL statements utilizing an active record
Disadvantages of CodeIgniter
- Lacks Composer integration
- Active record is creditable, but there are some scenarios where it is not entirely suitable and the code needs to be repeated
- Lacks a built-in authentication method against Active Directory
- Code is not fully compatible with the most recent versions of PHP
Which is the best – CodeIgniter or Laravel? To summarize, both of these PHP frameworks have their significance and benefits. Which one you should choose depends entirely on your project. Regardless, we can state that Laravel has an edge over CodeIgniter due to its more modern features.
If you have Laravel related project requirements, then connect with us, we will offer you the best Laravel solution. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00631.warc.gz | CC-MAIN-2022-40 | 8,919 | 77 |
https://lists.launchpad.net/maria-developers/msg07887.html | code | maria-developers team mailing list archive
Mailing list archive
Re: PLEASE REVIEW: pre-requisite patch for the data type plugins, also fixing a number of optimizer bugs
On 11/12/2014 01:28 PM, Sergei Golubchik wrote:
On Nov 03, Alexander Barkov wrote:
While working on pluggable data types, I noticed a few problems
related to optimizer:
MDEV-6950 Bad results with joins comparing DATE/DATETIME and INT/DECIMAL/DOUBLE/ENUM/VARCHAR columns
MDEV-6971 Bad results with joins comparing TIME and DOUBLE/DECIMAL columns
MDEV-6978 Bad results with joins comparing case insensitive VARCHAR/ENUM/SET expression to a _bin ENUM column
MDEV-6982 LEFT JOIN table elimination is not always used when it could
MDEV-6989 BINARY and COLLATE xxx_bin comparisions are not used for optimization in some cases
MDEV-6990 GROUP_MIN_MAX optimization is not applied in some cases when it could
MDEV-6991 GROUP_MIN_MAX optimization is erroneously applied in some cases
In some cases wrong result sets are returned.
So it would be nice to have this fixed in 10.0.
I made these changes as a standalone patch.
It's a big patch that introduces quite a lot of new code and new
concepts. What is a "Tool"?
It's an internal protected subclass in Field_optimizer.
Allows to implement all optimization operations
(e.g. ref access, range access, hash join, etc)
in a single virtual method can_optimize() using
a switch on operation type (Tooltype).
Another option would be to have separate 9 *virtual* methods,
one per operation. But as they looks quite similar inside
a particular Field_xxx (Field, Field_temporal, Field_longstr,
Field_geom, Field_enum), I thought having a switch is easier.
I would rather have it in 10.1, not in 10.0.
It was difficult to see what was wrong with the old code, that is, what
were actual bug fixes? Could you explain, please?
Generally, the problem was that the tests that checked if a particular
optimization operation can be applied (in opt_range.cc,
opt_table_elimination.cc, sql_select.cc) were not precise.
For example, the tests for STRING_RESULT did not take into account that
the field can actually be enum or temporal and the operation behaviour
should actually be different from what is correct for a regular
character string data type.
So I ended up with 5 different implementations of can_optimize():
Field - for numeric and bit types
Field_temporal - for temporal types
Field_longstr - for character string types
Field_geom - for geometry types
Field_enum - for ENUM and SET types
I also noticed some other optimizer bugs but not sure how to fix them
MDEV-6986 Bad results with join comparing INT and VARCHAR columns
I don't see how you can fix it. The correct fix would be to disable the
index in the second query and compare as doubles. But I could only
imagine how many applications it will break.
What do you suggest? Won't fix?
An idea (but I'm not sure):
comparing INT and VARCHAR as DECIMAL should give a more precise result,
and the index should still be usable. INT linearly and distinctly maps
to DECIMAL (unlike DOUBLE).
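The precision point is easy to demonstrate outside the server: an IEEE-754 DOUBLE has a 53-bit significand, so distinct BIGINT values can collapse to the same DOUBLE, while exact DECIMAL-style arithmetic keeps them apart. A quick illustration in Python (not MariaDB code):

```python
from decimal import Decimal

a, b = 2**53, 2**53 + 1          # two distinct BIGINT-sized values
print(float(a) == float(b))      # True  -> compared as DOUBLE they collide
print(Decimal(a) == Decimal(b))  # False -> compared as DECIMAL they stay distinct
```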
MDEV-6969 Bad results with joins comparing DOUBLE to BIGINT/DECIMAL columns
This one, I think, we can safely fix.
MDEV-6993 Bad results with join comparing DECIMAL and ENUM/SET columns
And this one.
Also, I found some other problems (not related to optimizer):
MDEV-6973 XOR aggregates argument collations
MDEV-7005 NULLIF does not work as documented
Not sure which version to fix them in.
10.0 looks ok. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511002.91/warc/CC-MAIN-20231002164819-20231002194819-00828.warc.gz | CC-MAIN-2023-40 | 3,299 | 59 |
https://aymericbeaumet.me/ | code | Edit (March 5th 2015): the frontend community has evolved in the last few months and tends to be less hostile to the CommonsJS style (e.g.: Angular is now available on npm). This article has been rewritten accordingly.
AngularJS is a frontend framework developed by folks at Google. It allows to build advanced web applications in a modular way by splitting the code base into small components. This brings a lot of advantages like a clean separation of concerns and an easier testability.
Given a `package.json`, Browserify enables you to require `node_modules` in the build. This allows relying on npm as a package manager for frontend dependencies, where Bower or Component would usually have been used.
When I first heard about Browserify, I immediately thought the modularity it brings would be really nice for building AngularJS applications. And it actually is. However, they are not a perfect match for now, and some drawbacks need to be fixed.
This article presents a solution to structure an AngularJS application using Browserify. It covers the use of non-CommonJS modules as dependencies. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459214.39/warc/CC-MAIN-20151124205419-00108-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 1,078 | 6 |
http://wdisneysecrets.com/forums/archive/index.php/t-14033.html | code | Disney & Orlando Secrets
Welcome to Disney Secrets!
View Full Version :
20-06-2010, 08:10 AM
Has anyone been to a concert recently? Saw Willie Nelson in Glasgow, he was great, must have spent 15 minutes signing autographs at the end of the show :wiggle:
| s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049273946.43/warc/CC-MAIN-20160524002113-00065-ip-10-185-217-139.ec2.internal.warc.gz | CC-MAIN-2016-22 | 350 | 6
https://blogs.gnome.org/gnomg/2013/03/06/come-speak-at-gnome-asia-in-seoul/ | code | Come speak at GNOME.Asia in Seoul!March 6, 2013 2:21 pm Uncategorized
All Gangnam Style jokes aside, I wanted to remind you that the 2013 GNOME.Asia summit has its Call for Papers deadline coming up on March 8. I'm really hoping to make it to South Korea this time because I know how awesome the summit has been in recent years! On top of that, there are a lot of exciting things going on in free software in South Korea; there's a solid GNOME team based there, and they're providing excellent leadership for the conference! And Max and the usual GNOME.Asia contributors continue to provide impressive dedication and enthusiasm.
Don’t forget to submit your proposal at http://2013.gnome.asia/cfp/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00075-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 715 | 3 |
https://engineering.buffalo.edu/computer-science-engineering/research/research-areas/theory/computer-security-and-cryptography.html | code | Computer scientists introduce innovative new work at annual conferences. The Computer Security and Cryptography research community expands the state of the art at these, the field's most prestigious and selective conferences:
ACM Computers and Communications Security (CCS)
European Cryptology Conference (Eurocrypt)
European Symposium on Research in Computer Security (ESORICS)
IEEE Symposium on Security and Privacy (Oakland)
International Cryptology Conference (Crypto)
Network and Distributed System Security Symposium (NDSS)
Focuses on applied cryptography, authentication, software and system security, threat modeling, anomaly detection, wireless security, cloud security, human-centered security, differential privacy and empirical cyber security. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00655.warc.gz | CC-MAIN-2023-14 | 755 | 8 |
https://leaddev.com/leadership-skills/leading-engineers-when-you-arent-one-yourself | code | Can a lack of technical expertise be an asset?
Several years ago, I was recruited for a role in engineering leadership at an established software company. This was curious, because although I’d worked in many roles in tech organizations and had been leading people for a long time, I was not a programmer. But the company needed somebody with solid management experience and an ability to lead a young satellite office; plus, in an org with separate tracks for individual contributors (ICs) and managers, I would be expected to work with senior engineers, not be one. I thought I could probably bring useful skills to the team, but I was actively concerned I’d be laughed out of the company by the actual programmers.
In fact, I wound up developing some of the most mutually satisfying internal partnerships of my career. I’m no longer at that company, but I’ve stayed in engineering leadership. Here are some of the things I learned in that role that might be helpful if you, too, find yourself leading people with more technical expertise than you have.
Software development involves a lot more than coding
This is beyond obvious. You also need design chops! Product vision! Somebody has to sell the stuff! But beyond all those key functional skills, you also need people who have seen the common mistakes software companies make – and some of the possible paths to success. That perspective is key, and surprisingly rare, when it comes to organizing people so that you actually ship things customers want.
For example, one of the most common mistakes companies make is setting software launch deadlines far in the future and insisting on a fixed set of specific, complex features (or, more commonly, an ever-increasing set of such features on that deadline). In a best-case scenario, you meet the deadline but ship something customers don’t use, and you seriously burn out the team along the way. In the worst case, you never ship anything at all, and you seriously burn out the team along the way.
You don’t need coding skills to identify this losing pattern and suggest other ways your teams can approach their work (two possible winners: medium-length deadlines with a variable scope of work; or very short deadlines with a very limited scope of work). Convincing organizations to work differently is hard, and it’s helpful if you can explain why software projects with a fixed complex scope and fixed long deadlines will fail. But that requires understanding software development, not code. If you’re able to influence your organization to work in more productive ways, you’ll be enormously valued by engineering teams and the business at large.
Lack of technical expertise can be useful
First, being a non-programmer helps reduce confusion about roles, and it can give ICs a chance to grow as tech leads. But there are more subtle benefits, too. One handy thing about having limited coding knowledge is that nobody expects you to know much about technology, and you can ask anything at all. That’s cool because not only do you get to learn, but it also gives engineers a chance to teach you things. In an IC/manager relationship, the manager typically has more explicit power in the org chart. By giving ICs a chance to gain status as teachers, you can balance the relationship a bit which creates fertile ground for partnering on day-to-day org strategy and on bigger decisions.
At my last company, a series of reorgs left several internally-focused teams buried in groups whose customers were external. The structure made it hard for them to work effectively, which I recognized not because I was reviewing pull requests (I didn’t even have a GitHub account), but because I could see they shared a set of communication problems. I proposed that we form a new group to house those teams, that we could experiment with serving our coworkers in new ways, and that I could lead the group. The tech leads on those teams were comfortable with me in that role, even though among senior directors at the company, I had the least direct technical experience.
Among the ways I’d developed relationships with those tech leads is that each of them had answered my technical questions over the previous years and had dedicated time to teaching me about our systems. They knew that I trusted their judgment as engineers and that I wouldn’t interfere with their decisions. They also knew, in part from seeing the ways I’d used the things they’d taught me, that I had complementary skills to bring to the group – things like helping them align technical proposals with company goals.
Being technical isn’t a thing
In the weeks before starting that first engineering leadership job, I took a lot of friends out for coffee and asked them all for advice about managing engineers. I got plenty of good tips that basically boiled down to: engineers are people, too, so manage them like anyone else. That was useful to hear.
But much more intriguing was the comment a friend in DevRel at Google made: ‘Being technical isn’t a thing, so don’t worry about that.’ She said it with such confidence, I had to reevaluate the common idea that there are categories of people, those technical and those non-technical. I’ve come to realize that what she meant is that technical expertise is a thing that people develop, not an inherent quality people have. Moreover, expertise is a spectrum – a non-linear one at that – and there isn’t a point at which you’re either technical or non-technical. (Not to mention that it’s counterproductive to discount other kinds of expertise in organizations that need a range of skills.)
Real talk: it’s one thing to understand that idea, and it’s another thing to internalize it. When I left that job for another engineering leadership role, I asked a lot of the people I’d been working with if they had advice for me in my new job. One principal, two mid-levels, and one early-career engineer all separately said to me, ‘You don’t need to worry about being technical enough. You’re plenty technical.’ I was taken aback.
These were people who’d reported to me and had to teach me very basic things, like how a monolith is different from service-oriented architecture, or what React is. And yet, they didn’t think my lack of technical depth was a problem. That didn’t mean it was time for me to stop learning. Instead, it meant that I could approach conversations confident that we could figure out software problems together, each bringing useful questions. It also meant I wasn’t personally lacking as a ‘non-technical’ leader, and it wasn’t helpful to broadcast any suggestion that I was.
Not every organization is a good fit for engineering leaders who aren’t programmers
When I was ready to leave that company, it took a while to find a new role. That’s in no small part because lots of companies want engineering managers to roll deep in code – either because that provides an efficiency in the way they’re organized, or because their culture values a specific kind of technical fluency. That’s fine; in fact, it was a strong signal that those companies weren’t a fit for me. I looked instead for companies that already had separate tracks for ICs and managers. That suggested they were sufficiently mature to value management skills. I also looked for companies that needed engineering managers to lead teams and build cross-functional relationships, especially with Product. Those are all responsibilities that benefit from experience in software – but not specifically with code – and are things I enjoy doing.
Since moving on, I’ve talked often with ICs who used to work with me. They commonly say that they hope we get to work together again. None of them have laughed me off. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511000.99/warc/CC-MAIN-20231002132844-20231002162844-00858.warc.gz | CC-MAIN-2023-40 | 7,802 | 19 |